00:00:00.000 Started by upstream project "autotest-per-patch" build number 127139 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.035 The recommended git tool is: git 00:00:00.035 using credential 00000000-0000-0000-0000-000000000002 00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.068 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.112 Using shallow fetch with depth 1 00:00:00.112 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.112 > git --version # timeout=10 00:00:00.174 > git --version # 'git version 2.39.2' 00:00:00.174 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.482 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.492 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.504 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:03.504 > git config core.sparsecheckout # timeout=10 00:00:03.515 > git read-tree -mu HEAD # timeout=10 00:00:03.533 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:03.566 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:03.566 > git rev-list --no-walk 86cd2acf6b4646bdb5ab15e0e320711d17ba4742 # timeout=10 00:00:03.657 [Pipeline] Start of Pipeline 00:00:03.673 [Pipeline] library 00:00:03.675 Loading library shm_lib@master 00:00:03.675 Library shm_lib@master is cached. Copying from home. 00:00:03.694 [Pipeline] node 00:00:03.703 Running on GP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.706 [Pipeline] { 00:00:03.716 [Pipeline] catchError 00:00:03.717 [Pipeline] { 00:00:03.731 [Pipeline] wrap 00:00:03.742 [Pipeline] { 00:00:03.751 [Pipeline] stage 00:00:03.753 [Pipeline] { (Prologue) 00:00:03.921 [Pipeline] sh 00:00:04.203 + logger -p user.info -t JENKINS-CI 00:00:04.220 [Pipeline] echo 00:00:04.222 Node: GP12 00:00:04.229 [Pipeline] sh 00:00:04.519 [Pipeline] setCustomBuildProperty 00:00:04.528 [Pipeline] echo 00:00:04.529 Cleanup processes 00:00:04.533 [Pipeline] sh 00:00:04.807 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.807 308652 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.817 [Pipeline] sh 00:00:05.094 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.094 ++ grep -v 'sudo pgrep' 00:00:05.094 ++ awk '{print $1}' 00:00:05.094 + sudo kill -9 00:00:05.094 + true 00:00:05.108 [Pipeline] cleanWs 00:00:05.116 [WS-CLEANUP] Deleting project workspace... 00:00:05.117 [WS-CLEANUP] Deferred wipeout is used... 
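For reference, the stale-process cleanup run by the Prologue stage above boils down to the following shell idiom; this is a minimal sketch assuming the same workspace path shown in the log, not the exact pipeline code.

    # Find leftovers from a previous run of this workspace and kill them,
    # tolerating the case where pgrep matches nothing besides itself.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true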
00:00:05.123 [WS-CLEANUP] done 00:00:05.126 [Pipeline] setCustomBuildProperty 00:00:05.137 [Pipeline] sh 00:00:05.417 + sudo git config --global --replace-all safe.directory '*' 00:00:05.502 [Pipeline] httpRequest 00:00:05.540 [Pipeline] echo 00:00:05.542 Sorcerer 10.211.164.101 is alive 00:00:05.548 [Pipeline] httpRequest 00:00:05.552 HttpMethod: GET 00:00:05.552 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:05.553 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:05.559 Response Code: HTTP/1.1 200 OK 00:00:05.560 Success: Status code 200 is in the accepted range: 200,404 00:00:05.560 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:16.516 [Pipeline] sh 00:00:16.799 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:16.815 [Pipeline] httpRequest 00:00:16.835 [Pipeline] echo 00:00:16.837 Sorcerer 10.211.164.101 is alive 00:00:16.845 [Pipeline] httpRequest 00:00:16.850 HttpMethod: GET 00:00:16.851 URL: http://10.211.164.101/packages/spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:00:16.851 Sending request to url: http://10.211.164.101/packages/spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:00:16.864 Response Code: HTTP/1.1 200 OK 00:00:16.864 Success: Status code 200 is in the accepted range: 200,404 00:00:16.865 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:02:00.729 [Pipeline] sh 00:02:01.013 + tar --no-same-owner -xf spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:02:04.306 [Pipeline] sh 00:02:04.591 + git -C spdk log --oneline -n5 00:02:04.591 c0d54772e test/common: Include test/nvme in the reap_spdk_processes() lookup 00:02:04.591 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:02:04.591 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:02:04.591 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:02:04.591 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:02:04.603 [Pipeline] } 00:02:04.623 [Pipeline] // stage 00:02:04.633 [Pipeline] stage 00:02:04.635 [Pipeline] { (Prepare) 00:02:04.653 [Pipeline] writeFile 00:02:04.673 [Pipeline] sh 00:02:04.959 + logger -p user.info -t JENKINS-CI 00:02:04.973 [Pipeline] sh 00:02:05.257 + logger -p user.info -t JENKINS-CI 00:02:05.269 [Pipeline] sh 00:02:05.552 + cat autorun-spdk.conf 00:02:05.552 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.552 SPDK_TEST_NVMF=1 00:02:05.552 SPDK_TEST_NVME_CLI=1 00:02:05.552 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.552 SPDK_TEST_NVMF_NICS=e810 00:02:05.552 SPDK_TEST_VFIOUSER=1 00:02:05.552 SPDK_RUN_UBSAN=1 00:02:05.552 NET_TYPE=phy 00:02:05.559 RUN_NIGHTLY=0 00:02:05.563 [Pipeline] readFile 00:02:05.586 [Pipeline] withEnv 00:02:05.588 [Pipeline] { 00:02:05.603 [Pipeline] sh 00:02:05.888 + set -ex 00:02:05.888 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:05.888 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.888 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.888 ++ SPDK_TEST_NVMF=1 00:02:05.888 ++ SPDK_TEST_NVME_CLI=1 00:02:05.888 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.888 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.888 ++ SPDK_TEST_VFIOUSER=1 00:02:05.888 ++ SPDK_RUN_UBSAN=1 00:02:05.888 ++ NET_TYPE=phy 00:02:05.888 ++ RUN_NIGHTLY=0 00:02:05.888 + case 
$SPDK_TEST_NVMF_NICS in 00:02:05.888 + DRIVERS=ice 00:02:05.888 + [[ tcp == \r\d\m\a ]] 00:02:05.888 + [[ -n ice ]] 00:02:05.888 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:05.888 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:09.179 rmmod: ERROR: Module irdma is not currently loaded 00:02:09.179 rmmod: ERROR: Module i40iw is not currently loaded 00:02:09.179 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:09.179 + true 00:02:09.179 + for D in $DRIVERS 00:02:09.179 + sudo modprobe ice 00:02:09.179 + exit 0 00:02:09.189 [Pipeline] } 00:02:09.207 [Pipeline] // withEnv 00:02:09.212 [Pipeline] } 00:02:09.229 [Pipeline] // stage 00:02:09.239 [Pipeline] catchError 00:02:09.241 [Pipeline] { 00:02:09.255 [Pipeline] timeout 00:02:09.255 Timeout set to expire in 50 min 00:02:09.257 [Pipeline] { 00:02:09.269 [Pipeline] stage 00:02:09.271 [Pipeline] { (Tests) 00:02:09.283 [Pipeline] sh 00:02:09.566 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.566 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.566 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.566 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:09.566 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.566 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:09.566 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:09.566 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:09.566 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:09.567 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:09.567 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:09.567 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.567 + source /etc/os-release 00:02:09.567 ++ NAME='Fedora Linux' 00:02:09.567 ++ VERSION='38 (Cloud Edition)' 00:02:09.567 ++ ID=fedora 00:02:09.567 ++ VERSION_ID=38 00:02:09.567 ++ VERSION_CODENAME= 00:02:09.567 ++ PLATFORM_ID=platform:f38 00:02:09.567 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:09.567 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.567 ++ LOGO=fedora-logo-icon 00:02:09.567 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:09.567 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.567 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:09.567 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.567 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.567 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.567 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:09.567 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.567 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:09.567 ++ SUPPORT_END=2024-05-14 00:02:09.567 ++ VARIANT='Cloud Edition' 00:02:09.567 ++ VARIANT_ID=cloud 00:02:09.567 + uname -a 00:02:09.567 Linux spdk-gp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:09.567 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:10.502 Hugepages 00:02:10.502 node hugesize free / total 00:02:10.502 node0 1048576kB 0 / 0 00:02:10.502 node0 2048kB 0 / 0 00:02:10.502 node1 1048576kB 0 / 0 00:02:10.502 node1 2048kB 0 / 0 00:02:10.502 00:02:10.502 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.502 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:10.502 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:10.502 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 
00:02:10.502 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:10.502 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:10.502 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:10.502 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:10.502 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:10.502 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:10.502 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:10.502 + rm -f /tmp/spdk-ld-path 00:02:10.502 + source autorun-spdk.conf 00:02:10.502 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.502 ++ SPDK_TEST_NVMF=1 00:02:10.502 ++ SPDK_TEST_NVME_CLI=1 00:02:10.502 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.502 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.502 ++ SPDK_TEST_VFIOUSER=1 00:02:10.502 ++ SPDK_RUN_UBSAN=1 00:02:10.502 ++ NET_TYPE=phy 00:02:10.502 ++ RUN_NIGHTLY=0 00:02:10.502 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.502 + [[ -n '' ]] 00:02:10.502 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.502 + for M in /var/spdk/build-*-manifest.txt 00:02:10.502 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.502 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.502 + for M in /var/spdk/build-*-manifest.txt 00:02:10.502 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.502 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.502 ++ uname 00:02:10.502 + [[ Linux == \L\i\n\u\x ]] 00:02:10.502 + sudo dmesg -T 00:02:10.502 + sudo dmesg --clear 00:02:10.767 + dmesg_pid=309969 00:02:10.767 + [[ Fedora Linux == FreeBSD ]] 00:02:10.767 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.767 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.767 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.767 + sudo dmesg -Tw 00:02:10.767 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.767 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.767 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.767 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.767 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:10.767 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.767 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.767 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.767 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.767 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.767 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.767 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.767 Test configuration: 00:02:10.767 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.767 SPDK_TEST_NVMF=1 00:02:10.767 SPDK_TEST_NVME_CLI=1 00:02:10.767 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.767 SPDK_TEST_NVMF_NICS=e810 00:02:10.767 SPDK_TEST_VFIOUSER=1 00:02:10.767 SPDK_RUN_UBSAN=1 00:02:10.767 NET_TYPE=phy 00:02:10.767 RUN_NIGHTLY=0 09:16:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.767 09:16:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.767 09:16:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.767 09:16:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.767 09:16:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.767 09:16:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.767 09:16:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.767 09:16:43 -- paths/export.sh@5 -- $ export PATH 00:02:10.767 09:16:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.767 09:16:43 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.767 09:16:43 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:10.767 09:16:43 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721891803.XXXXXX 00:02:10.767 09:16:43 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721891803.iD0Ir3 00:02:10.767 09:16:43 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:10.767 09:16:43 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:10.767 09:16:43 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:10.767 09:16:43 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:10.767 09:16:43 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.767 09:16:43 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:10.767 09:16:43 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:10.767 09:16:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.767 09:16:43 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:10.767 09:16:43 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:10.767 09:16:43 -- pm/common@17 -- $ local monitor 00:02:10.767 09:16:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.767 09:16:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.767 09:16:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.767 09:16:43 -- pm/common@21 -- $ date +%s 00:02:10.767 09:16:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.767 09:16:43 -- pm/common@21 -- $ date +%s 00:02:10.767 09:16:43 -- pm/common@25 -- $ sleep 1 00:02:10.767 09:16:43 -- pm/common@21 -- $ date +%s 00:02:10.767 09:16:43 -- pm/common@21 -- $ date +%s 00:02:10.767 09:16:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891803 00:02:10.767 09:16:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891803 00:02:10.767 09:16:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891803 00:02:10.767 09:16:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891803 00:02:10.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891803_collect-vmstat.pm.log 00:02:10.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891803_collect-cpu-load.pm.log 00:02:10.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891803_collect-cpu-temp.pm.log 00:02:10.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891803_collect-bmc-pm.bmc.pm.log 00:02:11.703 09:16:44 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:11.703 09:16:44 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.704 09:16:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.704 09:16:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.704 09:16:44 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.704 Thu Jul 25 07:16:44 AM UTC 2024 00:02:11.704 09:16:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.704 v24.09-pre-310-gc0d54772e 00:02:11.704 09:16:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.704 09:16:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.704 09:16:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.704 09:16:44 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:11.704 09:16:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:11.704 09:16:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.704 ************************************ 00:02:11.704 START TEST ubsan 00:02:11.704 ************************************ 00:02:11.704 09:16:44 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:11.704 using ubsan 00:02:11.704 00:02:11.704 real 0m0.000s 00:02:11.704 user 0m0.000s 00:02:11.704 sys 0m0.000s 00:02:11.704 09:16:44 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.704 09:16:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.704 ************************************ 00:02:11.704 END TEST ubsan 00:02:11.704 ************************************ 00:02:11.704 09:16:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.704 09:16:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.704 09:16:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.704 09:16:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.704 09:16:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.704 09:16:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.704 09:16:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.704 09:16:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.704 09:16:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:11.961 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:11.961 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:12.220 Using 'verbs' RDMA provider 00:02:22.770 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:32.746 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:32.746 Creating mk/config.mk...done. 00:02:32.746 Creating mk/cc.flags.mk...done. 00:02:32.746 Type 'make' to build. 00:02:32.746 09:17:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:32.746 09:17:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:32.746 09:17:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:32.746 09:17:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.746 ************************************ 00:02:32.746 START TEST make 00:02:32.746 ************************************ 00:02:32.746 09:17:04 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:32.746 make[1]: Nothing to be done for 'all'. 
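The build recorded above can be approximated outside Jenkins with the same configure flags the log shows; a rough sketch, assuming a local SPDK checkout (the ~/spdk path is illustrative, not taken from the log).

    # Configure and build SPDK with the options logged by autobuild.sh above.
    cd ~/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48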
00:02:34.137 The Meson build system 00:02:34.137 Version: 1.3.1 00:02:34.137 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:34.137 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:34.137 Build type: native build 00:02:34.137 Project name: libvfio-user 00:02:34.137 Project version: 0.0.1 00:02:34.137 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:34.137 C linker for the host machine: cc ld.bfd 2.39-16 00:02:34.137 Host machine cpu family: x86_64 00:02:34.137 Host machine cpu: x86_64 00:02:34.137 Run-time dependency threads found: YES 00:02:34.137 Library dl found: YES 00:02:34.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:34.137 Run-time dependency json-c found: YES 0.17 00:02:34.137 Run-time dependency cmocka found: YES 1.1.7 00:02:34.137 Program pytest-3 found: NO 00:02:34.137 Program flake8 found: NO 00:02:34.137 Program misspell-fixer found: NO 00:02:34.137 Program restructuredtext-lint found: NO 00:02:34.137 Program valgrind found: YES (/usr/bin/valgrind) 00:02:34.137 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.137 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.137 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.137 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:34.137 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:34.137 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:34.137 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:34.137 Build targets in project: 8 00:02:34.137 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:34.137 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:34.137 00:02:34.137 libvfio-user 0.0.1 00:02:34.137 00:02:34.137 User defined options 00:02:34.137 buildtype : debug 00:02:34.137 default_library: shared 00:02:34.137 libdir : /usr/local/lib 00:02:34.137 00:02:34.137 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.711 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:34.711 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:34.971 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:34.971 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:34.971 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:34.971 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:34.971 [6/37] Compiling C object samples/null.p/null.c.o 00:02:34.971 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:34.971 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:34.971 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:34.971 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:34.971 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:34.971 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:34.971 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:34.971 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:34.971 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:34.971 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:34.971 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:34.971 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:34.971 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:34.971 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:34.971 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:34.971 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:34.971 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:34.971 [24/37] Compiling C object samples/server.p/server.c.o 00:02:35.231 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:35.231 [26/37] Compiling C object samples/client.p/client.c.o 00:02:35.231 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:35.231 [28/37] Linking target samples/client 00:02:35.231 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:35.231 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:35.231 [31/37] Linking target test/unit_tests 00:02:35.497 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:35.497 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:35.497 [34/37] Linking target samples/server 00:02:35.497 [35/37] Linking target samples/lspci 00:02:35.497 [36/37] Linking target samples/gpio-pci-idio-16 00:02:35.497 [37/37] Linking target samples/null 00:02:35.497 INFO: autodetecting backend as ninja 00:02:35.497 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
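The "User defined options" summary above corresponds to a debug, shared-library Meson configuration of libvfio-user; a standalone equivalent would look roughly like the following (source and build directory names are placeholders, not taken from the log).

    # Assumed-equivalent out-of-tree Meson setup for libvfio-user:
    # debug build type, shared default_library, libdir under /usr/local/lib.
    meson setup build-debug libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C build-debug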
00:02:35.755 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:36.330 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.330 ninja: no work to do. 00:02:40.513 The Meson build system 00:02:40.513 Version: 1.3.1 00:02:40.513 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:40.513 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:40.513 Build type: native build 00:02:40.513 Program cat found: YES (/usr/bin/cat) 00:02:40.513 Project name: DPDK 00:02:40.513 Project version: 24.03.0 00:02:40.513 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:40.513 C linker for the host machine: cc ld.bfd 2.39-16 00:02:40.513 Host machine cpu family: x86_64 00:02:40.513 Host machine cpu: x86_64 00:02:40.513 Message: ## Building in Developer Mode ## 00:02:40.513 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.513 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:40.513 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.513 Program python3 found: YES (/usr/bin/python3) 00:02:40.513 Program cat found: YES (/usr/bin/cat) 00:02:40.513 Compiler for C supports arguments -march=native: YES 00:02:40.513 Checking for size of "void *" : 8 00:02:40.513 Checking for size of "void *" : 8 (cached) 00:02:40.513 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:40.513 Library m found: YES 00:02:40.513 Library numa found: YES 00:02:40.513 Has header "numaif.h" : YES 00:02:40.513 Library fdt found: NO 00:02:40.513 Library execinfo found: NO 00:02:40.513 Has header "execinfo.h" : YES 00:02:40.513 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:40.513 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.513 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.513 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.513 Run-time dependency openssl found: YES 3.0.9 00:02:40.513 Run-time dependency libpcap found: YES 1.10.4 00:02:40.513 Has header "pcap.h" with dependency libpcap: YES 00:02:40.513 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.513 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.513 Compiler for C supports arguments -Wformat: YES 00:02:40.513 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.513 Compiler for C supports arguments -Wformat-security: NO 00:02:40.513 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.513 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.513 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.513 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.513 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.513 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.513 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.513 Compiler for C supports arguments -Wundef: YES 00:02:40.513 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.513 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.513 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:40.513 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.513 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.513 Program objdump found: YES (/usr/bin/objdump) 00:02:40.513 Compiler for C supports arguments -mavx512f: YES 00:02:40.513 Checking if "AVX512 checking" compiles: YES 00:02:40.513 Fetching value of define "__SSE4_2__" : 1 00:02:40.513 Fetching value of define "__AES__" : 1 00:02:40.513 Fetching value of define "__AVX__" : 1 00:02:40.513 Fetching value of define "__AVX2__" : (undefined) 00:02:40.513 Fetching value of define "__AVX512BW__" : (undefined) 00:02:40.513 Fetching value of define "__AVX512CD__" : (undefined) 00:02:40.513 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:40.513 Fetching value of define "__AVX512F__" : (undefined) 00:02:40.513 Fetching value of define "__AVX512VL__" : (undefined) 00:02:40.513 Fetching value of define "__PCLMUL__" : 1 00:02:40.513 Fetching value of define "__RDRND__" : 1 00:02:40.513 Fetching value of define "__RDSEED__" : (undefined) 00:02:40.513 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:40.513 Fetching value of define "__znver1__" : (undefined) 00:02:40.513 Fetching value of define "__znver2__" : (undefined) 00:02:40.513 Fetching value of define "__znver3__" : (undefined) 00:02:40.513 Fetching value of define "__znver4__" : (undefined) 00:02:40.513 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.513 Message: lib/log: Defining dependency "log" 00:02:40.513 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.513 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.514 Checking for function "getentropy" : NO 00:02:40.514 Message: lib/eal: Defining dependency "eal" 00:02:40.514 Message: lib/ring: Defining dependency "ring" 00:02:40.514 Message: lib/rcu: Defining dependency "rcu" 00:02:40.514 Message: lib/mempool: Defining dependency "mempool" 00:02:40.514 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.514 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.514 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:40.514 Compiler for C supports arguments -mpclmul: YES 00:02:40.514 Compiler for C supports arguments -maes: YES 00:02:40.514 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.514 Compiler for C supports arguments -mavx512bw: YES 00:02:40.514 Compiler for C supports arguments -mavx512dq: YES 00:02:40.514 Compiler for C supports arguments -mavx512vl: YES 00:02:40.514 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.514 Compiler for C supports arguments -mavx2: YES 00:02:40.514 Compiler for C supports arguments -mavx: YES 00:02:40.514 Message: lib/net: Defining dependency "net" 00:02:40.514 Message: lib/meter: Defining dependency "meter" 00:02:40.514 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.514 Message: lib/pci: Defining dependency "pci" 00:02:40.514 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.514 Message: lib/hash: Defining dependency "hash" 00:02:40.514 Message: lib/timer: Defining dependency "timer" 00:02:40.514 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.514 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.514 Message: lib/dmadev: Defining dependency "dmadev" 00:02:40.514 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:40.514 Message: lib/power: Defining dependency "power" 00:02:40.514 Message: lib/reorder: Defining dependency "reorder" 00:02:40.514 
Message: lib/security: Defining dependency "security" 00:02:40.514 Has header "linux/userfaultfd.h" : YES 00:02:40.514 Has header "linux/vduse.h" : YES 00:02:40.514 Message: lib/vhost: Defining dependency "vhost" 00:02:40.514 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:40.514 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.514 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.514 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.514 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:40.514 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:40.514 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:40.514 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:40.514 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:40.514 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:40.514 Program doxygen found: YES (/usr/bin/doxygen) 00:02:40.514 Configuring doxy-api-html.conf using configuration 00:02:40.514 Configuring doxy-api-man.conf using configuration 00:02:40.514 Program mandb found: YES (/usr/bin/mandb) 00:02:40.514 Program sphinx-build found: NO 00:02:40.514 Configuring rte_build_config.h using configuration 00:02:40.514 Message: 00:02:40.514 ================= 00:02:40.514 Applications Enabled 00:02:40.514 ================= 00:02:40.514 00:02:40.514 apps: 00:02:40.514 00:02:40.514 00:02:40.514 Message: 00:02:40.514 ================= 00:02:40.514 Libraries Enabled 00:02:40.514 ================= 00:02:40.514 00:02:40.514 libs: 00:02:40.514 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:40.514 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:40.514 cryptodev, dmadev, power, reorder, security, vhost, 00:02:40.514 00:02:40.514 Message: 00:02:40.514 =============== 00:02:40.514 Drivers Enabled 00:02:40.514 =============== 00:02:40.514 00:02:40.514 common: 00:02:40.514 00:02:40.514 bus: 00:02:40.514 pci, vdev, 00:02:40.514 mempool: 00:02:40.514 ring, 00:02:40.514 dma: 00:02:40.514 00:02:40.514 net: 00:02:40.514 00:02:40.514 crypto: 00:02:40.514 00:02:40.514 compress: 00:02:40.514 00:02:40.514 vdpa: 00:02:40.514 00:02:40.514 00:02:40.514 Message: 00:02:40.514 ================= 00:02:40.514 Content Skipped 00:02:40.514 ================= 00:02:40.514 00:02:40.514 apps: 00:02:40.514 dumpcap: explicitly disabled via build config 00:02:40.514 graph: explicitly disabled via build config 00:02:40.514 pdump: explicitly disabled via build config 00:02:40.514 proc-info: explicitly disabled via build config 00:02:40.514 test-acl: explicitly disabled via build config 00:02:40.514 test-bbdev: explicitly disabled via build config 00:02:40.514 test-cmdline: explicitly disabled via build config 00:02:40.514 test-compress-perf: explicitly disabled via build config 00:02:40.514 test-crypto-perf: explicitly disabled via build config 00:02:40.514 test-dma-perf: explicitly disabled via build config 00:02:40.514 test-eventdev: explicitly disabled via build config 00:02:40.514 test-fib: explicitly disabled via build config 00:02:40.514 test-flow-perf: explicitly disabled via build config 00:02:40.514 test-gpudev: explicitly disabled via build config 00:02:40.514 test-mldev: explicitly disabled via build config 00:02:40.514 test-pipeline: explicitly disabled via build config 00:02:40.514 test-pmd: explicitly disabled via build config 
00:02:40.514 test-regex: explicitly disabled via build config 00:02:40.514 test-sad: explicitly disabled via build config 00:02:40.514 test-security-perf: explicitly disabled via build config 00:02:40.514 00:02:40.514 libs: 00:02:40.514 argparse: explicitly disabled via build config 00:02:40.514 metrics: explicitly disabled via build config 00:02:40.514 acl: explicitly disabled via build config 00:02:40.514 bbdev: explicitly disabled via build config 00:02:40.514 bitratestats: explicitly disabled via build config 00:02:40.514 bpf: explicitly disabled via build config 00:02:40.514 cfgfile: explicitly disabled via build config 00:02:40.514 distributor: explicitly disabled via build config 00:02:40.514 efd: explicitly disabled via build config 00:02:40.514 eventdev: explicitly disabled via build config 00:02:40.514 dispatcher: explicitly disabled via build config 00:02:40.514 gpudev: explicitly disabled via build config 00:02:40.514 gro: explicitly disabled via build config 00:02:40.514 gso: explicitly disabled via build config 00:02:40.514 ip_frag: explicitly disabled via build config 00:02:40.514 jobstats: explicitly disabled via build config 00:02:40.514 latencystats: explicitly disabled via build config 00:02:40.514 lpm: explicitly disabled via build config 00:02:40.514 member: explicitly disabled via build config 00:02:40.514 pcapng: explicitly disabled via build config 00:02:40.514 rawdev: explicitly disabled via build config 00:02:40.514 regexdev: explicitly disabled via build config 00:02:40.514 mldev: explicitly disabled via build config 00:02:40.514 rib: explicitly disabled via build config 00:02:40.514 sched: explicitly disabled via build config 00:02:40.514 stack: explicitly disabled via build config 00:02:40.514 ipsec: explicitly disabled via build config 00:02:40.514 pdcp: explicitly disabled via build config 00:02:40.514 fib: explicitly disabled via build config 00:02:40.514 port: explicitly disabled via build config 00:02:40.514 pdump: explicitly disabled via build config 00:02:40.514 table: explicitly disabled via build config 00:02:40.514 pipeline: explicitly disabled via build config 00:02:40.514 graph: explicitly disabled via build config 00:02:40.514 node: explicitly disabled via build config 00:02:40.514 00:02:40.514 drivers: 00:02:40.514 common/cpt: not in enabled drivers build config 00:02:40.514 common/dpaax: not in enabled drivers build config 00:02:40.514 common/iavf: not in enabled drivers build config 00:02:40.514 common/idpf: not in enabled drivers build config 00:02:40.514 common/ionic: not in enabled drivers build config 00:02:40.514 common/mvep: not in enabled drivers build config 00:02:40.514 common/octeontx: not in enabled drivers build config 00:02:40.514 bus/auxiliary: not in enabled drivers build config 00:02:40.514 bus/cdx: not in enabled drivers build config 00:02:40.514 bus/dpaa: not in enabled drivers build config 00:02:40.514 bus/fslmc: not in enabled drivers build config 00:02:40.514 bus/ifpga: not in enabled drivers build config 00:02:40.514 bus/platform: not in enabled drivers build config 00:02:40.514 bus/uacce: not in enabled drivers build config 00:02:40.514 bus/vmbus: not in enabled drivers build config 00:02:40.514 common/cnxk: not in enabled drivers build config 00:02:40.514 common/mlx5: not in enabled drivers build config 00:02:40.514 common/nfp: not in enabled drivers build config 00:02:40.514 common/nitrox: not in enabled drivers build config 00:02:40.514 common/qat: not in enabled drivers build config 00:02:40.514 common/sfc_efx: not in 
enabled drivers build config 00:02:40.514 mempool/bucket: not in enabled drivers build config 00:02:40.514 mempool/cnxk: not in enabled drivers build config 00:02:40.514 mempool/dpaa: not in enabled drivers build config 00:02:40.514 mempool/dpaa2: not in enabled drivers build config 00:02:40.514 mempool/octeontx: not in enabled drivers build config 00:02:40.514 mempool/stack: not in enabled drivers build config 00:02:40.514 dma/cnxk: not in enabled drivers build config 00:02:40.514 dma/dpaa: not in enabled drivers build config 00:02:40.514 dma/dpaa2: not in enabled drivers build config 00:02:40.514 dma/hisilicon: not in enabled drivers build config 00:02:40.514 dma/idxd: not in enabled drivers build config 00:02:40.514 dma/ioat: not in enabled drivers build config 00:02:40.514 dma/skeleton: not in enabled drivers build config 00:02:40.514 net/af_packet: not in enabled drivers build config 00:02:40.514 net/af_xdp: not in enabled drivers build config 00:02:40.514 net/ark: not in enabled drivers build config 00:02:40.514 net/atlantic: not in enabled drivers build config 00:02:40.514 net/avp: not in enabled drivers build config 00:02:40.514 net/axgbe: not in enabled drivers build config 00:02:40.515 net/bnx2x: not in enabled drivers build config 00:02:40.515 net/bnxt: not in enabled drivers build config 00:02:40.515 net/bonding: not in enabled drivers build config 00:02:40.515 net/cnxk: not in enabled drivers build config 00:02:40.515 net/cpfl: not in enabled drivers build config 00:02:40.515 net/cxgbe: not in enabled drivers build config 00:02:40.515 net/dpaa: not in enabled drivers build config 00:02:40.515 net/dpaa2: not in enabled drivers build config 00:02:40.515 net/e1000: not in enabled drivers build config 00:02:40.515 net/ena: not in enabled drivers build config 00:02:40.515 net/enetc: not in enabled drivers build config 00:02:40.515 net/enetfec: not in enabled drivers build config 00:02:40.515 net/enic: not in enabled drivers build config 00:02:40.515 net/failsafe: not in enabled drivers build config 00:02:40.515 net/fm10k: not in enabled drivers build config 00:02:40.515 net/gve: not in enabled drivers build config 00:02:40.515 net/hinic: not in enabled drivers build config 00:02:40.515 net/hns3: not in enabled drivers build config 00:02:40.515 net/i40e: not in enabled drivers build config 00:02:40.515 net/iavf: not in enabled drivers build config 00:02:40.515 net/ice: not in enabled drivers build config 00:02:40.515 net/idpf: not in enabled drivers build config 00:02:40.515 net/igc: not in enabled drivers build config 00:02:40.515 net/ionic: not in enabled drivers build config 00:02:40.515 net/ipn3ke: not in enabled drivers build config 00:02:40.515 net/ixgbe: not in enabled drivers build config 00:02:40.515 net/mana: not in enabled drivers build config 00:02:40.515 net/memif: not in enabled drivers build config 00:02:40.515 net/mlx4: not in enabled drivers build config 00:02:40.515 net/mlx5: not in enabled drivers build config 00:02:40.515 net/mvneta: not in enabled drivers build config 00:02:40.515 net/mvpp2: not in enabled drivers build config 00:02:40.515 net/netvsc: not in enabled drivers build config 00:02:40.515 net/nfb: not in enabled drivers build config 00:02:40.515 net/nfp: not in enabled drivers build config 00:02:40.515 net/ngbe: not in enabled drivers build config 00:02:40.515 net/null: not in enabled drivers build config 00:02:40.515 net/octeontx: not in enabled drivers build config 00:02:40.515 net/octeon_ep: not in enabled drivers build config 00:02:40.515 
net/pcap: not in enabled drivers build config 00:02:40.515 net/pfe: not in enabled drivers build config 00:02:40.515 net/qede: not in enabled drivers build config 00:02:40.515 net/ring: not in enabled drivers build config 00:02:40.515 net/sfc: not in enabled drivers build config 00:02:40.515 net/softnic: not in enabled drivers build config 00:02:40.515 net/tap: not in enabled drivers build config 00:02:40.515 net/thunderx: not in enabled drivers build config 00:02:40.515 net/txgbe: not in enabled drivers build config 00:02:40.515 net/vdev_netvsc: not in enabled drivers build config 00:02:40.515 net/vhost: not in enabled drivers build config 00:02:40.515 net/virtio: not in enabled drivers build config 00:02:40.515 net/vmxnet3: not in enabled drivers build config 00:02:40.515 raw/*: missing internal dependency, "rawdev" 00:02:40.515 crypto/armv8: not in enabled drivers build config 00:02:40.515 crypto/bcmfs: not in enabled drivers build config 00:02:40.515 crypto/caam_jr: not in enabled drivers build config 00:02:40.515 crypto/ccp: not in enabled drivers build config 00:02:40.515 crypto/cnxk: not in enabled drivers build config 00:02:40.515 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.515 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.515 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.515 crypto/mlx5: not in enabled drivers build config 00:02:40.515 crypto/mvsam: not in enabled drivers build config 00:02:40.515 crypto/nitrox: not in enabled drivers build config 00:02:40.515 crypto/null: not in enabled drivers build config 00:02:40.515 crypto/octeontx: not in enabled drivers build config 00:02:40.515 crypto/openssl: not in enabled drivers build config 00:02:40.515 crypto/scheduler: not in enabled drivers build config 00:02:40.515 crypto/uadk: not in enabled drivers build config 00:02:40.515 crypto/virtio: not in enabled drivers build config 00:02:40.515 compress/isal: not in enabled drivers build config 00:02:40.515 compress/mlx5: not in enabled drivers build config 00:02:40.515 compress/nitrox: not in enabled drivers build config 00:02:40.515 compress/octeontx: not in enabled drivers build config 00:02:40.515 compress/zlib: not in enabled drivers build config 00:02:40.515 regex/*: missing internal dependency, "regexdev" 00:02:40.515 ml/*: missing internal dependency, "mldev" 00:02:40.515 vdpa/ifc: not in enabled drivers build config 00:02:40.515 vdpa/mlx5: not in enabled drivers build config 00:02:40.515 vdpa/nfp: not in enabled drivers build config 00:02:40.515 vdpa/sfc: not in enabled drivers build config 00:02:40.515 event/*: missing internal dependency, "eventdev" 00:02:40.515 baseband/*: missing internal dependency, "bbdev" 00:02:40.515 gpu/*: missing internal dependency, "gpudev" 00:02:40.515 00:02:40.515 00:02:41.081 Build targets in project: 85 00:02:41.081 00:02:41.081 DPDK 24.03.0 00:02:41.081 00:02:41.081 User defined options 00:02:41.081 buildtype : debug 00:02:41.081 default_library : shared 00:02:41.081 libdir : lib 00:02:41.081 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:41.081 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:41.081 c_link_args : 00:02:41.081 cpu_instruction_set: native 00:02:41.081 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:41.081 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:41.081 enable_docs : false 00:02:41.081 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.081 enable_kmods : false 00:02:41.081 max_lcores : 128 00:02:41.081 tests : false 00:02:41.081 00:02:41.081 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.346 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:41.346 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.346 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.346 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.346 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.346 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.346 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:41.347 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.347 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.347 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.610 [10/268] Linking static target lib/librte_kvargs.a 00:02:41.610 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.610 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.610 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.610 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.610 [15/268] Linking static target lib/librte_log.a 00:02:41.610 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.182 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.183 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.183 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.183 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.183 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.183 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.183 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.183 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.448 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.448 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:42.448 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.448 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.448 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:42.448 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 
00:02:42.448 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.448 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.448 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.448 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.448 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.448 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.448 [37/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.448 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.448 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.448 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.448 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.448 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.448 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.448 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.448 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.448 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.448 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:42.448 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.448 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.448 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.448 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.448 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.448 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.448 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.448 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.448 [56/268] Linking static target lib/librte_telemetry.a 00:02:42.448 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.448 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.706 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.706 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.706 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.706 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.706 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.706 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.706 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.706 [66/268] Linking target lib/librte_log.so.24.1 00:02:42.968 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.968 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.968 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.968 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.968 [71/268] 
Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.968 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.968 [73/268] Linking static target lib/librte_pci.a 00:02:43.230 [74/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:43.230 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.230 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:43.230 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.230 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:43.230 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.230 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.230 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:43.230 [82/268] Linking target lib/librte_kvargs.so.24.1 00:02:43.230 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.230 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.230 [85/268] Linking static target lib/librte_ring.a 00:02:43.230 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.230 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.230 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.230 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.230 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.230 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.230 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.230 [93/268] Linking static target lib/librte_meter.a 00:02:43.230 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.491 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.491 [96/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.491 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.491 [98/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.491 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.491 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.491 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:43.491 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.491 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.491 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.491 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.491 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.491 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.491 [108/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:43.491 [109/268] Linking static target lib/librte_mempool.a 00:02:43.491 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.491 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.491 [112/268] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:02:43.491 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.491 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:43.491 [115/268] Linking static target lib/librte_rcu.a 00:02:43.491 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.491 [117/268] Linking static target lib/librte_eal.a 00:02:43.491 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.491 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.491 [120/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.827 [121/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.827 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.827 [123/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.827 [124/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.827 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.827 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.827 [127/268] Linking target lib/librte_telemetry.so.24.1 00:02:43.827 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.827 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.827 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:43.827 [131/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.827 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.827 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.827 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:43.827 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.089 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.090 [137/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.090 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.090 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.090 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.090 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.090 [142/268] Linking static target lib/librte_net.a 00:02:44.090 [143/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.090 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.352 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.352 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.352 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:44.352 [148/268] Linking static target lib/librte_cmdline.a 00:02:44.352 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.352 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.352 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.352 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.352 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.352 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.352 [155/268] Linking static target lib/librte_timer.a 00:02:44.612 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.612 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.612 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.612 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.612 [160/268] Linking static target lib/librte_dmadev.a 00:02:44.612 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.612 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.612 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.612 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.612 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.612 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.870 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.870 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.870 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.870 [170/268] Linking static target lib/librte_power.a 00:02:44.870 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.870 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.870 [173/268] Linking static target lib/librte_compressdev.a 00:02:44.870 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.870 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.870 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.870 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.870 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.870 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.870 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.870 [181/268] Linking static target lib/librte_hash.a 00:02:44.870 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.870 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.870 [184/268] Linking static target lib/librte_reorder.a 00:02:45.129 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.129 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.129 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.129 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.129 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.129 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.129 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.129 [192/268] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:02:45.129 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.129 [194/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:45.129 [195/268] Linking static target lib/librte_mbuf.a 00:02:45.129 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.129 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.129 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.129 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.129 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.129 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.129 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.129 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:45.129 [204/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.387 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.387 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.387 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.387 [208/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.387 [209/268] Linking static target lib/librte_security.a 00:02:45.387 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.387 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.387 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:45.387 [213/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.387 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.387 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.387 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.387 [217/268] Linking static target drivers/librte_mempool_ring.a 00:02:45.387 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.387 [219/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.645 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.645 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.645 [222/268] Linking static target lib/librte_ethdev.a 00:02:45.645 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.645 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:45.645 [225/268] Linking static target lib/librte_cryptodev.a 00:02:45.645 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.018 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.390 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:49.765 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:49.765 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.765 [231/268] Linking target lib/librte_eal.so.24.1 00:02:50.024 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.024 [233/268] Linking target lib/librte_pci.so.24.1 00:02:50.024 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:50.024 [235/268] Linking target lib/librte_ring.so.24.1 00:02:50.024 [236/268] Linking target lib/librte_timer.so.24.1 00:02:50.024 [237/268] Linking target lib/librte_meter.so.24.1 00:02:50.024 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:50.024 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:50.282 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:50.282 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:50.282 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:50.282 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:50.282 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:50.282 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:50.282 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:50.282 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:50.282 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:50.282 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:50.282 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:50.540 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:50.540 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:50.540 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:50.540 [254/268] Linking target lib/librte_net.so.24.1 00:02:50.540 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:50.540 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:50.540 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:50.798 [258/268] Linking target lib/librte_security.so.24.1 00:02:50.798 [259/268] Linking target lib/librte_hash.so.24.1 00:02:50.798 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:50.798 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:50.798 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:50.798 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:50.798 [264/268] Linking target lib/librte_power.so.24.1 00:02:53.327 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.327 [266/268] Linking static target lib/librte_vhost.a 00:02:54.262 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.520 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:54.520 INFO: autodetecting backend as ninja 00:02:54.520 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:55.453 CC lib/ut_mock/mock.o 00:02:55.453 CC lib/log/log.o 00:02:55.453 CC lib/log/log_flags.o 00:02:55.453 CC lib/log/log_deprecated.o 00:02:55.453 CC lib/ut/ut.o 00:02:55.453 LIB libspdk_log.a 00:02:55.453 
LIB libspdk_ut.a 00:02:55.453 LIB libspdk_ut_mock.a 00:02:55.453 SO libspdk_ut.so.2.0 00:02:55.453 SO libspdk_ut_mock.so.6.0 00:02:55.453 SO libspdk_log.so.7.0 00:02:55.711 SYMLINK libspdk_ut.so 00:02:55.711 SYMLINK libspdk_ut_mock.so 00:02:55.711 SYMLINK libspdk_log.so 00:02:55.711 CC lib/ioat/ioat.o 00:02:55.711 CC lib/util/base64.o 00:02:55.711 CC lib/dma/dma.o 00:02:55.711 CC lib/util/bit_array.o 00:02:55.711 CXX lib/trace_parser/trace.o 00:02:55.711 CC lib/util/cpuset.o 00:02:55.711 CC lib/util/crc16.o 00:02:55.711 CC lib/util/crc32.o 00:02:55.711 CC lib/util/crc32c.o 00:02:55.711 CC lib/util/crc32_ieee.o 00:02:55.711 CC lib/util/crc64.o 00:02:55.711 CC lib/util/dif.o 00:02:55.711 CC lib/util/fd.o 00:02:55.711 CC lib/util/fd_group.o 00:02:55.711 CC lib/util/file.o 00:02:55.711 CC lib/util/hexlify.o 00:02:55.711 CC lib/util/iov.o 00:02:55.711 CC lib/util/math.o 00:02:55.711 CC lib/util/net.o 00:02:55.711 CC lib/util/pipe.o 00:02:55.711 CC lib/util/strerror_tls.o 00:02:55.711 CC lib/util/string.o 00:02:55.711 CC lib/util/uuid.o 00:02:55.711 CC lib/util/zipf.o 00:02:55.711 CC lib/util/xor.o 00:02:55.969 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.969 CC lib/vfio_user/host/vfio_user.o 00:02:55.969 LIB libspdk_dma.a 00:02:55.969 SO libspdk_dma.so.4.0 00:02:55.969 SYMLINK libspdk_dma.so 00:02:56.226 LIB libspdk_ioat.a 00:02:56.226 LIB libspdk_vfio_user.a 00:02:56.226 SO libspdk_ioat.so.7.0 00:02:56.226 SO libspdk_vfio_user.so.5.0 00:02:56.226 SYMLINK libspdk_ioat.so 00:02:56.226 SYMLINK libspdk_vfio_user.so 00:02:56.226 LIB libspdk_util.a 00:02:56.484 SO libspdk_util.so.10.0 00:02:56.484 SYMLINK libspdk_util.so 00:02:56.743 CC lib/json/json_parse.o 00:02:56.743 CC lib/json/json_util.o 00:02:56.743 CC lib/vmd/vmd.o 00:02:56.743 CC lib/rdma_utils/rdma_utils.o 00:02:56.743 CC lib/conf/conf.o 00:02:56.743 CC lib/json/json_write.o 00:02:56.743 CC lib/env_dpdk/env.o 00:02:56.743 CC lib/rdma_provider/common.o 00:02:56.743 CC lib/idxd/idxd.o 00:02:56.743 CC lib/env_dpdk/memory.o 00:02:56.743 CC lib/vmd/led.o 00:02:56.743 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.743 CC lib/env_dpdk/pci.o 00:02:56.743 CC lib/env_dpdk/init.o 00:02:56.743 CC lib/idxd/idxd_user.o 00:02:56.743 CC lib/idxd/idxd_kernel.o 00:02:56.743 CC lib/env_dpdk/threads.o 00:02:56.743 CC lib/env_dpdk/pci_ioat.o 00:02:56.743 CC lib/env_dpdk/pci_virtio.o 00:02:56.743 CC lib/env_dpdk/pci_vmd.o 00:02:56.743 CC lib/env_dpdk/pci_idxd.o 00:02:56.743 CC lib/env_dpdk/pci_event.o 00:02:56.743 CC lib/env_dpdk/sigbus_handler.o 00:02:56.743 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.743 CC lib/env_dpdk/pci_dpdk.o 00:02:56.743 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:56.743 LIB libspdk_trace_parser.a 00:02:56.743 SO libspdk_trace_parser.so.5.0 00:02:57.001 SYMLINK libspdk_trace_parser.so 00:02:57.001 LIB libspdk_rdma_provider.a 00:02:57.001 SO libspdk_rdma_provider.so.6.0 00:02:57.001 LIB libspdk_conf.a 00:02:57.001 SYMLINK libspdk_rdma_provider.so 00:02:57.001 LIB libspdk_rdma_utils.a 00:02:57.001 LIB libspdk_json.a 00:02:57.001 SO libspdk_conf.so.6.0 00:02:57.001 SO libspdk_rdma_utils.so.1.0 00:02:57.001 SO libspdk_json.so.6.0 00:02:57.001 SYMLINK libspdk_conf.so 00:02:57.001 SYMLINK libspdk_rdma_utils.so 00:02:57.001 SYMLINK libspdk_json.so 00:02:57.259 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.259 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.259 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.259 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.259 LIB libspdk_idxd.a 00:02:57.259 SO libspdk_idxd.so.12.0 00:02:57.259 SYMLINK libspdk_idxd.so 
00:02:57.516 LIB libspdk_vmd.a 00:02:57.516 SO libspdk_vmd.so.6.0 00:02:57.516 SYMLINK libspdk_vmd.so 00:02:57.516 LIB libspdk_jsonrpc.a 00:02:57.516 SO libspdk_jsonrpc.so.6.0 00:02:57.516 SYMLINK libspdk_jsonrpc.so 00:02:57.773 CC lib/rpc/rpc.o 00:02:58.031 LIB libspdk_rpc.a 00:02:58.031 SO libspdk_rpc.so.6.0 00:02:58.031 SYMLINK libspdk_rpc.so 00:02:58.289 CC lib/notify/notify.o 00:02:58.289 CC lib/trace/trace.o 00:02:58.289 CC lib/notify/notify_rpc.o 00:02:58.289 CC lib/trace/trace_flags.o 00:02:58.289 CC lib/keyring/keyring.o 00:02:58.289 CC lib/trace/trace_rpc.o 00:02:58.289 CC lib/keyring/keyring_rpc.o 00:02:58.547 LIB libspdk_notify.a 00:02:58.547 SO libspdk_notify.so.6.0 00:02:58.547 LIB libspdk_keyring.a 00:02:58.547 LIB libspdk_trace.a 00:02:58.547 SYMLINK libspdk_notify.so 00:02:58.547 SO libspdk_keyring.so.1.0 00:02:58.547 SO libspdk_trace.so.10.0 00:02:58.547 SYMLINK libspdk_keyring.so 00:02:58.547 SYMLINK libspdk_trace.so 00:02:58.806 LIB libspdk_env_dpdk.a 00:02:58.806 SO libspdk_env_dpdk.so.15.0 00:02:58.806 CC lib/sock/sock.o 00:02:58.806 CC lib/thread/thread.o 00:02:58.806 CC lib/sock/sock_rpc.o 00:02:58.806 CC lib/thread/iobuf.o 00:02:58.806 SYMLINK libspdk_env_dpdk.so 00:02:59.064 LIB libspdk_sock.a 00:02:59.064 SO libspdk_sock.so.10.0 00:02:59.322 SYMLINK libspdk_sock.so 00:02:59.322 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.322 CC lib/nvme/nvme_ctrlr.o 00:02:59.322 CC lib/nvme/nvme_fabric.o 00:02:59.322 CC lib/nvme/nvme_ns_cmd.o 00:02:59.322 CC lib/nvme/nvme_ns.o 00:02:59.322 CC lib/nvme/nvme_pcie_common.o 00:02:59.322 CC lib/nvme/nvme_pcie.o 00:02:59.322 CC lib/nvme/nvme_qpair.o 00:02:59.322 CC lib/nvme/nvme.o 00:02:59.322 CC lib/nvme/nvme_quirks.o 00:02:59.322 CC lib/nvme/nvme_transport.o 00:02:59.322 CC lib/nvme/nvme_discovery.o 00:02:59.322 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.322 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.322 CC lib/nvme/nvme_tcp.o 00:02:59.322 CC lib/nvme/nvme_opal.o 00:02:59.322 CC lib/nvme/nvme_io_msg.o 00:02:59.322 CC lib/nvme/nvme_poll_group.o 00:02:59.322 CC lib/nvme/nvme_zns.o 00:02:59.322 CC lib/nvme/nvme_stubs.o 00:02:59.322 CC lib/nvme/nvme_auth.o 00:02:59.322 CC lib/nvme/nvme_cuse.o 00:02:59.322 CC lib/nvme/nvme_vfio_user.o 00:02:59.322 CC lib/nvme/nvme_rdma.o 00:03:00.258 LIB libspdk_thread.a 00:03:00.258 SO libspdk_thread.so.10.1 00:03:00.516 SYMLINK libspdk_thread.so 00:03:00.516 CC lib/accel/accel.o 00:03:00.516 CC lib/blob/blobstore.o 00:03:00.516 CC lib/init/json_config.o 00:03:00.516 CC lib/virtio/virtio.o 00:03:00.516 CC lib/virtio/virtio_vhost_user.o 00:03:00.516 CC lib/vfu_tgt/tgt_endpoint.o 00:03:00.516 CC lib/blob/request.o 00:03:00.516 CC lib/init/subsystem.o 00:03:00.516 CC lib/accel/accel_rpc.o 00:03:00.516 CC lib/virtio/virtio_vfio_user.o 00:03:00.516 CC lib/blob/zeroes.o 00:03:00.516 CC lib/vfu_tgt/tgt_rpc.o 00:03:00.516 CC lib/blob/blob_bs_dev.o 00:03:00.516 CC lib/init/rpc.o 00:03:00.516 CC lib/init/subsystem_rpc.o 00:03:00.516 CC lib/accel/accel_sw.o 00:03:00.516 CC lib/virtio/virtio_pci.o 00:03:00.775 LIB libspdk_init.a 00:03:01.033 SO libspdk_init.so.5.0 00:03:01.033 LIB libspdk_vfu_tgt.a 00:03:01.033 LIB libspdk_virtio.a 00:03:01.033 SYMLINK libspdk_init.so 00:03:01.033 SO libspdk_vfu_tgt.so.3.0 00:03:01.033 SO libspdk_virtio.so.7.0 00:03:01.033 SYMLINK libspdk_vfu_tgt.so 00:03:01.033 SYMLINK libspdk_virtio.so 00:03:01.033 CC lib/event/app.o 00:03:01.033 CC lib/event/reactor.o 00:03:01.033 CC lib/event/log_rpc.o 00:03:01.033 CC lib/event/app_rpc.o 00:03:01.033 CC lib/event/scheduler_static.o 00:03:01.600 LIB 
libspdk_event.a 00:03:01.600 SO libspdk_event.so.14.0 00:03:01.600 LIB libspdk_accel.a 00:03:01.600 SYMLINK libspdk_event.so 00:03:01.600 SO libspdk_accel.so.16.0 00:03:01.857 LIB libspdk_nvme.a 00:03:01.857 SYMLINK libspdk_accel.so 00:03:01.857 SO libspdk_nvme.so.13.1 00:03:01.857 CC lib/bdev/bdev.o 00:03:01.857 CC lib/bdev/bdev_rpc.o 00:03:01.857 CC lib/bdev/bdev_zone.o 00:03:01.857 CC lib/bdev/part.o 00:03:01.857 CC lib/bdev/scsi_nvme.o 00:03:02.115 SYMLINK libspdk_nvme.so 00:03:03.489 LIB libspdk_blob.a 00:03:03.747 SO libspdk_blob.so.11.0 00:03:03.747 SYMLINK libspdk_blob.so 00:03:04.005 CC lib/blobfs/blobfs.o 00:03:04.005 CC lib/blobfs/tree.o 00:03:04.005 CC lib/lvol/lvol.o 00:03:04.571 LIB libspdk_bdev.a 00:03:04.571 SO libspdk_bdev.so.16.0 00:03:04.571 SYMLINK libspdk_bdev.so 00:03:04.571 LIB libspdk_blobfs.a 00:03:04.571 SO libspdk_blobfs.so.10.0 00:03:04.884 SYMLINK libspdk_blobfs.so 00:03:04.884 LIB libspdk_lvol.a 00:03:04.884 SO libspdk_lvol.so.10.0 00:03:04.884 CC lib/ublk/ublk.o 00:03:04.884 CC lib/ftl/ftl_core.o 00:03:04.884 CC lib/nvmf/ctrlr.o 00:03:04.884 CC lib/ublk/ublk_rpc.o 00:03:04.884 CC lib/scsi/dev.o 00:03:04.884 CC lib/ftl/ftl_init.o 00:03:04.884 CC lib/nvmf/ctrlr_discovery.o 00:03:04.884 CC lib/nbd/nbd.o 00:03:04.884 CC lib/ftl/ftl_layout.o 00:03:04.884 CC lib/scsi/lun.o 00:03:04.884 CC lib/nvmf/ctrlr_bdev.o 00:03:04.884 CC lib/nbd/nbd_rpc.o 00:03:04.884 CC lib/scsi/port.o 00:03:04.884 CC lib/nvmf/subsystem.o 00:03:04.884 CC lib/ftl/ftl_debug.o 00:03:04.884 CC lib/scsi/scsi.o 00:03:04.884 CC lib/ftl/ftl_io.o 00:03:04.884 CC lib/scsi/scsi_bdev.o 00:03:04.884 CC lib/nvmf/nvmf_rpc.o 00:03:04.884 CC lib/ftl/ftl_sb.o 00:03:04.884 CC lib/nvmf/nvmf.o 00:03:04.884 CC lib/scsi/scsi_pr.o 00:03:04.884 CC lib/nvmf/transport.o 00:03:04.884 CC lib/ftl/ftl_l2p.o 00:03:04.884 CC lib/ftl/ftl_l2p_flat.o 00:03:04.884 CC lib/scsi/task.o 00:03:04.884 CC lib/scsi/scsi_rpc.o 00:03:04.884 CC lib/nvmf/tcp.o 00:03:04.884 CC lib/ftl/ftl_nv_cache.o 00:03:04.884 CC lib/nvmf/stubs.o 00:03:04.884 CC lib/nvmf/mdns_server.o 00:03:04.884 CC lib/ftl/ftl_band.o 00:03:04.884 CC lib/nvmf/vfio_user.o 00:03:04.884 CC lib/ftl/ftl_band_ops.o 00:03:04.884 CC lib/nvmf/rdma.o 00:03:04.884 CC lib/ftl/ftl_writer.o 00:03:04.884 CC lib/ftl/ftl_rq.o 00:03:04.884 CC lib/nvmf/auth.o 00:03:04.884 CC lib/ftl/ftl_reloc.o 00:03:04.884 CC lib/ftl/ftl_l2p_cache.o 00:03:04.884 CC lib/ftl/ftl_p2l.o 00:03:04.884 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.884 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.884 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.884 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.884 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.884 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.884 SYMLINK libspdk_lvol.so 00:03:04.884 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.143 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.143 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.143 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.143 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.143 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.143 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.143 CC lib/ftl/utils/ftl_conf.o 00:03:05.143 CC lib/ftl/utils/ftl_md.o 00:03:05.143 CC lib/ftl/utils/ftl_mempool.o 00:03:05.405 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.405 CC lib/ftl/utils/ftl_property.o 00:03:05.405 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.405 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:05.405 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:05.405 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:05.405 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:05.405 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:05.405 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:05.405 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:05.405 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:05.405 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:05.405 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:05.405 CC lib/ftl/base/ftl_base_dev.o 00:03:05.405 CC lib/ftl/base/ftl_base_bdev.o 00:03:05.405 CC lib/ftl/ftl_trace.o 00:03:05.663 LIB libspdk_nbd.a 00:03:05.663 SO libspdk_nbd.so.7.0 00:03:05.663 SYMLINK libspdk_nbd.so 00:03:05.663 LIB libspdk_scsi.a 00:03:05.920 SO libspdk_scsi.so.9.0 00:03:05.920 SYMLINK libspdk_scsi.so 00:03:05.920 LIB libspdk_ublk.a 00:03:05.920 SO libspdk_ublk.so.3.0 00:03:05.920 SYMLINK libspdk_ublk.so 00:03:06.178 CC lib/vhost/vhost.o 00:03:06.178 CC lib/iscsi/conn.o 00:03:06.178 CC lib/vhost/vhost_rpc.o 00:03:06.178 CC lib/iscsi/init_grp.o 00:03:06.178 CC lib/vhost/vhost_scsi.o 00:03:06.178 CC lib/iscsi/iscsi.o 00:03:06.178 CC lib/vhost/vhost_blk.o 00:03:06.178 CC lib/iscsi/md5.o 00:03:06.178 CC lib/vhost/rte_vhost_user.o 00:03:06.178 CC lib/iscsi/param.o 00:03:06.178 CC lib/iscsi/portal_grp.o 00:03:06.178 CC lib/iscsi/tgt_node.o 00:03:06.178 CC lib/iscsi/iscsi_subsystem.o 00:03:06.178 CC lib/iscsi/iscsi_rpc.o 00:03:06.178 CC lib/iscsi/task.o 00:03:06.178 LIB libspdk_ftl.a 00:03:06.435 SO libspdk_ftl.so.9.0 00:03:06.710 SYMLINK libspdk_ftl.so 00:03:07.307 LIB libspdk_vhost.a 00:03:07.307 SO libspdk_vhost.so.8.0 00:03:07.307 LIB libspdk_nvmf.a 00:03:07.307 SYMLINK libspdk_vhost.so 00:03:07.578 SO libspdk_nvmf.so.19.0 00:03:07.578 LIB libspdk_iscsi.a 00:03:07.578 SO libspdk_iscsi.so.8.0 00:03:07.578 SYMLINK libspdk_nvmf.so 00:03:07.578 SYMLINK libspdk_iscsi.so 00:03:07.857 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.857 CC module/vfu_device/vfu_virtio.o 00:03:07.857 CC module/vfu_device/vfu_virtio_blk.o 00:03:07.857 CC module/vfu_device/vfu_virtio_scsi.o 00:03:07.857 CC module/vfu_device/vfu_virtio_rpc.o 00:03:08.141 CC module/keyring/linux/keyring.o 00:03:08.141 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.141 CC module/accel/error/accel_error.o 00:03:08.141 CC module/keyring/linux/keyring_rpc.o 00:03:08.141 CC module/accel/error/accel_error_rpc.o 00:03:08.141 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.141 CC module/accel/ioat/accel_ioat.o 00:03:08.141 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.141 CC module/accel/dsa/accel_dsa.o 00:03:08.141 CC module/blob/bdev/blob_bdev.o 00:03:08.141 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.141 CC module/keyring/file/keyring.o 00:03:08.141 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.141 CC module/keyring/file/keyring_rpc.o 00:03:08.141 CC module/accel/iaa/accel_iaa.o 00:03:08.141 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.141 CC module/sock/posix/posix.o 00:03:08.141 LIB libspdk_env_dpdk_rpc.a 00:03:08.141 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.141 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.141 LIB libspdk_keyring_linux.a 00:03:08.141 LIB libspdk_keyring_file.a 00:03:08.141 LIB libspdk_scheduler_gscheduler.a 00:03:08.141 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.141 SO libspdk_keyring_linux.so.1.0 00:03:08.141 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.141 SO libspdk_keyring_file.so.1.0 00:03:08.415 LIB libspdk_accel_error.a 00:03:08.415 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.415 LIB libspdk_accel_ioat.a 00:03:08.415 LIB libspdk_scheduler_dynamic.a 00:03:08.415 SO libspdk_accel_error.so.2.0 00:03:08.415 LIB libspdk_accel_iaa.a 00:03:08.415 SO libspdk_accel_ioat.so.6.0 00:03:08.415 SO 
libspdk_scheduler_dynamic.so.4.0 00:03:08.415 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.415 SYMLINK libspdk_keyring_linux.so 00:03:08.415 SYMLINK libspdk_keyring_file.so 00:03:08.415 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.415 SO libspdk_accel_iaa.so.3.0 00:03:08.415 SYMLINK libspdk_accel_error.so 00:03:08.415 LIB libspdk_accel_dsa.a 00:03:08.415 LIB libspdk_blob_bdev.a 00:03:08.415 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.415 SYMLINK libspdk_accel_ioat.so 00:03:08.415 SO libspdk_blob_bdev.so.11.0 00:03:08.415 SO libspdk_accel_dsa.so.5.0 00:03:08.415 SYMLINK libspdk_accel_iaa.so 00:03:08.415 SYMLINK libspdk_blob_bdev.so 00:03:08.415 SYMLINK libspdk_accel_dsa.so 00:03:08.703 LIB libspdk_vfu_device.a 00:03:08.703 CC module/bdev/error/vbdev_error.o 00:03:08.703 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.703 CC module/bdev/gpt/gpt.o 00:03:08.703 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.703 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.703 CC module/bdev/split/vbdev_split.o 00:03:08.703 CC module/bdev/raid/bdev_raid.o 00:03:08.703 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.703 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.703 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.703 CC module/bdev/null/bdev_null.o 00:03:08.703 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.703 CC module/bdev/delay/vbdev_delay.o 00:03:08.703 CC module/bdev/malloc/bdev_malloc.o 00:03:08.703 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.703 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.703 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.703 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.703 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.703 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.703 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.703 CC module/bdev/raid/raid0.o 00:03:08.703 CC module/bdev/null/bdev_null_rpc.o 00:03:08.703 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.703 CC module/bdev/raid/raid1.o 00:03:08.703 CC module/bdev/raid/concat.o 00:03:08.703 CC module/bdev/nvme/bdev_nvme.o 00:03:08.703 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.703 CC module/bdev/nvme/nvme_rpc.o 00:03:08.703 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.703 SO libspdk_vfu_device.so.3.0 00:03:08.703 CC module/bdev/ftl/bdev_ftl.o 00:03:08.703 CC module/bdev/nvme/vbdev_opal.o 00:03:08.703 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.703 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.703 CC module/bdev/aio/bdev_aio.o 00:03:08.703 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.703 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.703 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.703 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.703 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.703 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.703 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.703 SYMLINK libspdk_vfu_device.so 00:03:08.973 LIB libspdk_sock_posix.a 00:03:08.974 SO libspdk_sock_posix.so.6.0 00:03:08.974 LIB libspdk_blobfs_bdev.a 00:03:08.974 SO libspdk_blobfs_bdev.so.6.0 00:03:08.974 SYMLINK libspdk_sock_posix.so 00:03:08.974 LIB libspdk_bdev_passthru.a 00:03:08.974 LIB libspdk_bdev_split.a 00:03:09.269 SO libspdk_bdev_split.so.6.0 00:03:09.269 SO libspdk_bdev_passthru.so.6.0 00:03:09.269 LIB libspdk_bdev_error.a 00:03:09.269 LIB libspdk_bdev_ftl.a 00:03:09.269 LIB libspdk_bdev_gpt.a 00:03:09.269 SYMLINK libspdk_blobfs_bdev.so 00:03:09.269 SO libspdk_bdev_ftl.so.6.0 00:03:09.269 SO libspdk_bdev_error.so.6.0 00:03:09.269 SO libspdk_bdev_gpt.so.6.0 00:03:09.269 LIB libspdk_bdev_null.a 
00:03:09.269 SYMLINK libspdk_bdev_split.so 00:03:09.269 SYMLINK libspdk_bdev_passthru.so 00:03:09.269 SO libspdk_bdev_null.so.6.0 00:03:09.269 SYMLINK libspdk_bdev_gpt.so 00:03:09.269 SYMLINK libspdk_bdev_error.so 00:03:09.269 SYMLINK libspdk_bdev_ftl.so 00:03:09.269 LIB libspdk_bdev_zone_block.a 00:03:09.269 LIB libspdk_bdev_malloc.a 00:03:09.269 SO libspdk_bdev_zone_block.so.6.0 00:03:09.269 SO libspdk_bdev_malloc.so.6.0 00:03:09.269 SYMLINK libspdk_bdev_null.so 00:03:09.269 LIB libspdk_bdev_iscsi.a 00:03:09.269 LIB libspdk_bdev_aio.a 00:03:09.269 SO libspdk_bdev_iscsi.so.6.0 00:03:09.269 SYMLINK libspdk_bdev_zone_block.so 00:03:09.269 SO libspdk_bdev_aio.so.6.0 00:03:09.269 SYMLINK libspdk_bdev_malloc.so 00:03:09.269 LIB libspdk_bdev_delay.a 00:03:09.269 SYMLINK libspdk_bdev_iscsi.so 00:03:09.269 SO libspdk_bdev_delay.so.6.0 00:03:09.269 SYMLINK libspdk_bdev_aio.so 00:03:09.541 LIB libspdk_bdev_lvol.a 00:03:09.541 SO libspdk_bdev_lvol.so.6.0 00:03:09.541 SYMLINK libspdk_bdev_delay.so 00:03:09.541 SYMLINK libspdk_bdev_lvol.so 00:03:09.541 LIB libspdk_bdev_virtio.a 00:03:09.541 SO libspdk_bdev_virtio.so.6.0 00:03:09.541 SYMLINK libspdk_bdev_virtio.so 00:03:10.192 LIB libspdk_bdev_raid.a 00:03:10.192 SO libspdk_bdev_raid.so.6.0 00:03:10.192 SYMLINK libspdk_bdev_raid.so 00:03:11.191 LIB libspdk_bdev_nvme.a 00:03:11.191 SO libspdk_bdev_nvme.so.7.0 00:03:11.191 SYMLINK libspdk_bdev_nvme.so 00:03:11.479 CC module/event/subsystems/sock/sock.o 00:03:11.479 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.479 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.479 CC module/event/subsystems/vmd/vmd.o 00:03:11.479 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.479 CC module/event/subsystems/keyring/keyring.o 00:03:11.479 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:11.479 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.479 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.759 LIB libspdk_event_keyring.a 00:03:11.759 LIB libspdk_event_vhost_blk.a 00:03:11.759 LIB libspdk_event_vfu_tgt.a 00:03:11.759 LIB libspdk_event_scheduler.a 00:03:11.759 LIB libspdk_event_sock.a 00:03:11.759 LIB libspdk_event_vmd.a 00:03:11.759 LIB libspdk_event_iobuf.a 00:03:11.759 SO libspdk_event_vhost_blk.so.3.0 00:03:11.759 SO libspdk_event_keyring.so.1.0 00:03:11.759 SO libspdk_event_vfu_tgt.so.3.0 00:03:11.759 SO libspdk_event_scheduler.so.4.0 00:03:11.759 SO libspdk_event_sock.so.5.0 00:03:11.759 SO libspdk_event_vmd.so.6.0 00:03:11.759 SO libspdk_event_iobuf.so.3.0 00:03:11.759 SYMLINK libspdk_event_vhost_blk.so 00:03:11.759 SYMLINK libspdk_event_keyring.so 00:03:11.759 SYMLINK libspdk_event_vfu_tgt.so 00:03:11.759 SYMLINK libspdk_event_scheduler.so 00:03:11.759 SYMLINK libspdk_event_sock.so 00:03:11.759 SYMLINK libspdk_event_vmd.so 00:03:11.759 SYMLINK libspdk_event_iobuf.so 00:03:12.030 CC module/event/subsystems/accel/accel.o 00:03:12.030 LIB libspdk_event_accel.a 00:03:12.030 SO libspdk_event_accel.so.6.0 00:03:12.030 SYMLINK libspdk_event_accel.so 00:03:12.293 CC module/event/subsystems/bdev/bdev.o 00:03:12.551 LIB libspdk_event_bdev.a 00:03:12.551 SO libspdk_event_bdev.so.6.0 00:03:12.551 SYMLINK libspdk_event_bdev.so 00:03:12.809 CC module/event/subsystems/nbd/nbd.o 00:03:12.809 CC module/event/subsystems/scsi/scsi.o 00:03:12.809 CC module/event/subsystems/ublk/ublk.o 00:03:12.809 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.809 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.809 LIB libspdk_event_nbd.a 00:03:12.809 LIB libspdk_event_ublk.a 00:03:12.809 LIB 
libspdk_event_scsi.a 00:03:12.809 SO libspdk_event_ublk.so.3.0 00:03:12.809 SO libspdk_event_nbd.so.6.0 00:03:12.809 SO libspdk_event_scsi.so.6.0 00:03:12.809 SYMLINK libspdk_event_ublk.so 00:03:12.809 SYMLINK libspdk_event_nbd.so 00:03:13.066 SYMLINK libspdk_event_scsi.so 00:03:13.066 LIB libspdk_event_nvmf.a 00:03:13.066 SO libspdk_event_nvmf.so.6.0 00:03:13.066 SYMLINK libspdk_event_nvmf.so 00:03:13.066 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.066 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.324 LIB libspdk_event_vhost_scsi.a 00:03:13.324 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.324 LIB libspdk_event_iscsi.a 00:03:13.324 SO libspdk_event_iscsi.so.6.0 00:03:13.324 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.324 SYMLINK libspdk_event_iscsi.so 00:03:13.582 SO libspdk.so.6.0 00:03:13.582 SYMLINK libspdk.so 00:03:13.582 CXX app/trace/trace.o 00:03:13.582 CC app/spdk_nvme_identify/identify.o 00:03:13.582 CC app/trace_record/trace_record.o 00:03:13.582 CC test/rpc_client/rpc_client_test.o 00:03:13.582 CC app/spdk_lspci/spdk_lspci.o 00:03:13.582 CC app/spdk_top/spdk_top.o 00:03:13.582 TEST_HEADER include/spdk/accel.h 00:03:13.582 TEST_HEADER include/spdk/accel_module.h 00:03:13.582 CC app/spdk_nvme_perf/perf.o 00:03:13.582 TEST_HEADER include/spdk/assert.h 00:03:13.582 TEST_HEADER include/spdk/barrier.h 00:03:13.582 TEST_HEADER include/spdk/base64.h 00:03:13.582 TEST_HEADER include/spdk/bdev.h 00:03:13.582 CC app/spdk_nvme_discover/discovery_aer.o 00:03:13.583 TEST_HEADER include/spdk/bdev_module.h 00:03:13.583 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.583 TEST_HEADER include/spdk/bit_array.h 00:03:13.583 TEST_HEADER include/spdk/bit_pool.h 00:03:13.583 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.583 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.583 TEST_HEADER include/spdk/blobfs.h 00:03:13.583 TEST_HEADER include/spdk/blob.h 00:03:13.583 TEST_HEADER include/spdk/conf.h 00:03:13.583 TEST_HEADER include/spdk/config.h 00:03:13.583 TEST_HEADER include/spdk/cpuset.h 00:03:13.583 TEST_HEADER include/spdk/crc16.h 00:03:13.583 TEST_HEADER include/spdk/crc32.h 00:03:13.583 TEST_HEADER include/spdk/crc64.h 00:03:13.583 TEST_HEADER include/spdk/dif.h 00:03:13.583 TEST_HEADER include/spdk/dma.h 00:03:13.583 TEST_HEADER include/spdk/endian.h 00:03:13.583 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.583 TEST_HEADER include/spdk/env.h 00:03:13.583 TEST_HEADER include/spdk/event.h 00:03:13.583 TEST_HEADER include/spdk/fd_group.h 00:03:13.583 TEST_HEADER include/spdk/fd.h 00:03:13.583 TEST_HEADER include/spdk/ftl.h 00:03:13.583 TEST_HEADER include/spdk/file.h 00:03:13.583 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.583 TEST_HEADER include/spdk/hexlify.h 00:03:13.583 TEST_HEADER include/spdk/histogram_data.h 00:03:13.583 TEST_HEADER include/spdk/idxd.h 00:03:13.583 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.583 TEST_HEADER include/spdk/init.h 00:03:13.583 TEST_HEADER include/spdk/ioat.h 00:03:13.583 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.583 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.583 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.583 TEST_HEADER include/spdk/json.h 00:03:13.583 TEST_HEADER include/spdk/keyring.h 00:03:13.583 TEST_HEADER include/spdk/keyring_module.h 00:03:13.583 TEST_HEADER include/spdk/likely.h 00:03:13.583 TEST_HEADER include/spdk/log.h 00:03:13.583 TEST_HEADER include/spdk/lvol.h 00:03:13.583 TEST_HEADER include/spdk/memory.h 00:03:13.583 TEST_HEADER include/spdk/mmio.h 00:03:13.583 TEST_HEADER include/spdk/nbd.h 00:03:13.583 TEST_HEADER 
include/spdk/notify.h 00:03:13.583 TEST_HEADER include/spdk/net.h 00:03:13.583 TEST_HEADER include/spdk/nvme.h 00:03:13.583 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.583 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.583 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.583 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.583 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.583 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.583 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.583 TEST_HEADER include/spdk/nvmf.h 00:03:13.583 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.583 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.583 TEST_HEADER include/spdk/opal.h 00:03:13.583 TEST_HEADER include/spdk/opal_spec.h 00:03:13.583 TEST_HEADER include/spdk/pci_ids.h 00:03:13.583 TEST_HEADER include/spdk/queue.h 00:03:13.583 TEST_HEADER include/spdk/pipe.h 00:03:13.583 TEST_HEADER include/spdk/reduce.h 00:03:13.583 TEST_HEADER include/spdk/rpc.h 00:03:13.583 TEST_HEADER include/spdk/scheduler.h 00:03:13.583 TEST_HEADER include/spdk/scsi.h 00:03:13.583 TEST_HEADER include/spdk/sock.h 00:03:13.583 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.583 TEST_HEADER include/spdk/string.h 00:03:13.583 TEST_HEADER include/spdk/stdinc.h 00:03:13.583 TEST_HEADER include/spdk/thread.h 00:03:13.583 TEST_HEADER include/spdk/trace.h 00:03:13.583 TEST_HEADER include/spdk/trace_parser.h 00:03:13.583 TEST_HEADER include/spdk/tree.h 00:03:13.583 TEST_HEADER include/spdk/ublk.h 00:03:13.583 TEST_HEADER include/spdk/util.h 00:03:13.583 TEST_HEADER include/spdk/uuid.h 00:03:13.583 TEST_HEADER include/spdk/version.h 00:03:13.583 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.583 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.583 TEST_HEADER include/spdk/vhost.h 00:03:13.583 TEST_HEADER include/spdk/vmd.h 00:03:13.583 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:13.583 TEST_HEADER include/spdk/xor.h 00:03:13.583 TEST_HEADER include/spdk/zipf.h 00:03:13.583 CXX test/cpp_headers/accel.o 00:03:13.583 CXX test/cpp_headers/accel_module.o 00:03:13.583 CXX test/cpp_headers/assert.o 00:03:13.850 CXX test/cpp_headers/barrier.o 00:03:13.850 CXX test/cpp_headers/base64.o 00:03:13.850 CXX test/cpp_headers/bdev.o 00:03:13.850 CC app/spdk_dd/spdk_dd.o 00:03:13.850 CXX test/cpp_headers/bdev_module.o 00:03:13.850 CXX test/cpp_headers/bdev_zone.o 00:03:13.850 CXX test/cpp_headers/bit_array.o 00:03:13.850 CXX test/cpp_headers/bit_pool.o 00:03:13.850 CXX test/cpp_headers/blob_bdev.o 00:03:13.850 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.850 CXX test/cpp_headers/blobfs.o 00:03:13.850 CXX test/cpp_headers/blob.o 00:03:13.850 CXX test/cpp_headers/conf.o 00:03:13.850 CC app/nvmf_tgt/nvmf_main.o 00:03:13.850 CXX test/cpp_headers/config.o 00:03:13.850 CXX test/cpp_headers/cpuset.o 00:03:13.850 CXX test/cpp_headers/crc16.o 00:03:13.850 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.850 CXX test/cpp_headers/crc32.o 00:03:13.850 CC test/app/jsoncat/jsoncat.o 00:03:13.850 CC test/thread/poller_perf/poller_perf.o 00:03:13.850 CC test/app/histogram_perf/histogram_perf.o 00:03:13.850 CC examples/ioat/perf/perf.o 00:03:13.850 CC examples/util/zipf/zipf.o 00:03:13.850 CC test/env/vtophys/vtophys.o 00:03:13.850 CC app/spdk_tgt/spdk_tgt.o 00:03:13.850 CC test/env/memory/memory_ut.o 00:03:13.850 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.850 CC test/env/pci/pci_ut.o 00:03:13.850 CC test/app/stub/stub.o 00:03:13.850 CC examples/ioat/verify/verify.o 00:03:13.850 CC app/fio/nvme/fio_plugin.o 00:03:13.850 CC test/dma/test_dma/test_dma.o 00:03:13.850 
CC test/app/bdev_svc/bdev_svc.o 00:03:13.850 CC app/fio/bdev/fio_plugin.o 00:03:13.850 CC test/env/mem_callbacks/mem_callbacks.o 00:03:13.850 LINK spdk_lspci 00:03:14.113 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.113 LINK rpc_client_test 00:03:14.113 LINK spdk_nvme_discover 00:03:14.113 LINK jsoncat 00:03:14.113 LINK vtophys 00:03:14.113 LINK poller_perf 00:03:14.113 CXX test/cpp_headers/crc64.o 00:03:14.113 LINK interrupt_tgt 00:03:14.113 LINK histogram_perf 00:03:14.113 LINK nvmf_tgt 00:03:14.113 LINK zipf 00:03:14.113 CXX test/cpp_headers/dif.o 00:03:14.113 CXX test/cpp_headers/dma.o 00:03:14.113 CXX test/cpp_headers/endian.o 00:03:14.113 CXX test/cpp_headers/env_dpdk.o 00:03:14.113 CXX test/cpp_headers/env.o 00:03:14.113 CXX test/cpp_headers/event.o 00:03:14.113 CXX test/cpp_headers/fd_group.o 00:03:14.113 CXX test/cpp_headers/fd.o 00:03:14.113 LINK env_dpdk_post_init 00:03:14.113 CXX test/cpp_headers/file.o 00:03:14.113 CXX test/cpp_headers/ftl.o 00:03:14.113 CXX test/cpp_headers/gpt_spec.o 00:03:14.113 LINK stub 00:03:14.113 LINK spdk_trace_record 00:03:14.113 CXX test/cpp_headers/hexlify.o 00:03:14.113 LINK iscsi_tgt 00:03:14.113 LINK ioat_perf 00:03:14.374 CXX test/cpp_headers/histogram_data.o 00:03:14.374 CXX test/cpp_headers/idxd.o 00:03:14.374 LINK verify 00:03:14.374 CXX test/cpp_headers/idxd_spec.o 00:03:14.374 LINK spdk_tgt 00:03:14.374 LINK bdev_svc 00:03:14.374 CXX test/cpp_headers/init.o 00:03:14.374 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.374 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.374 CXX test/cpp_headers/ioat.o 00:03:14.374 CXX test/cpp_headers/ioat_spec.o 00:03:14.374 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.374 CXX test/cpp_headers/iscsi_spec.o 00:03:14.374 CXX test/cpp_headers/json.o 00:03:14.374 CXX test/cpp_headers/jsonrpc.o 00:03:14.374 LINK spdk_dd 00:03:14.374 CXX test/cpp_headers/keyring.o 00:03:14.374 CXX test/cpp_headers/keyring_module.o 00:03:14.639 CXX test/cpp_headers/likely.o 00:03:14.639 CXX test/cpp_headers/log.o 00:03:14.639 CXX test/cpp_headers/lvol.o 00:03:14.639 LINK spdk_trace 00:03:14.639 CXX test/cpp_headers/memory.o 00:03:14.639 LINK pci_ut 00:03:14.639 CXX test/cpp_headers/mmio.o 00:03:14.639 CXX test/cpp_headers/nbd.o 00:03:14.639 CXX test/cpp_headers/net.o 00:03:14.639 CXX test/cpp_headers/notify.o 00:03:14.639 CXX test/cpp_headers/nvme.o 00:03:14.639 CXX test/cpp_headers/nvme_intel.o 00:03:14.639 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.639 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.639 CXX test/cpp_headers/nvme_spec.o 00:03:14.639 CXX test/cpp_headers/nvme_zns.o 00:03:14.640 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.640 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:14.640 CXX test/cpp_headers/nvmf.o 00:03:14.640 LINK test_dma 00:03:14.640 CXX test/cpp_headers/nvmf_spec.o 00:03:14.640 CXX test/cpp_headers/nvmf_transport.o 00:03:14.640 CXX test/cpp_headers/opal.o 00:03:14.640 CXX test/cpp_headers/opal_spec.o 00:03:14.640 CXX test/cpp_headers/pci_ids.o 00:03:14.901 CXX test/cpp_headers/pipe.o 00:03:14.901 LINK nvme_fuzz 00:03:14.901 CC test/event/event_perf/event_perf.o 00:03:14.901 CC test/event/reactor/reactor.o 00:03:14.901 CXX test/cpp_headers/queue.o 00:03:14.901 CC test/event/reactor_perf/reactor_perf.o 00:03:14.901 CXX test/cpp_headers/reduce.o 00:03:14.901 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.901 CXX test/cpp_headers/rpc.o 00:03:14.901 CC examples/sock/hello_world/hello_sock.o 00:03:14.901 CC examples/idxd/perf/perf.o 00:03:14.901 LINK spdk_nvme 00:03:14.901 CC examples/vmd/led/led.o 
00:03:14.901 CXX test/cpp_headers/scheduler.o 00:03:14.901 CC examples/thread/thread/thread_ex.o 00:03:14.901 LINK spdk_bdev 00:03:14.901 CXX test/cpp_headers/scsi.o 00:03:14.901 CXX test/cpp_headers/scsi_spec.o 00:03:14.901 CXX test/cpp_headers/sock.o 00:03:14.901 CXX test/cpp_headers/stdinc.o 00:03:14.901 CXX test/cpp_headers/string.o 00:03:14.901 CC test/event/app_repeat/app_repeat.o 00:03:14.901 CXX test/cpp_headers/thread.o 00:03:14.901 CXX test/cpp_headers/trace.o 00:03:14.901 CXX test/cpp_headers/trace_parser.o 00:03:15.163 CXX test/cpp_headers/tree.o 00:03:15.163 CXX test/cpp_headers/ublk.o 00:03:15.163 CXX test/cpp_headers/util.o 00:03:15.163 CXX test/cpp_headers/uuid.o 00:03:15.163 CXX test/cpp_headers/version.o 00:03:15.163 CXX test/cpp_headers/vfio_user_pci.o 00:03:15.163 CXX test/cpp_headers/vfio_user_spec.o 00:03:15.163 CXX test/cpp_headers/vhost.o 00:03:15.163 CC test/event/scheduler/scheduler.o 00:03:15.163 CXX test/cpp_headers/xor.o 00:03:15.163 CXX test/cpp_headers/vmd.o 00:03:15.163 CXX test/cpp_headers/zipf.o 00:03:15.163 LINK event_perf 00:03:15.163 LINK lsvmd 00:03:15.163 LINK reactor 00:03:15.163 LINK reactor_perf 00:03:15.163 LINK mem_callbacks 00:03:15.163 LINK spdk_nvme_perf 00:03:15.163 LINK led 00:03:15.163 CC app/vhost/vhost.o 00:03:15.163 LINK spdk_nvme_identify 00:03:15.163 LINK vhost_fuzz 00:03:15.163 LINK app_repeat 00:03:15.422 LINK spdk_top 00:03:15.422 LINK hello_sock 00:03:15.422 CC test/nvme/e2edp/nvme_dp.o 00:03:15.422 CC test/nvme/reset/reset.o 00:03:15.422 CC test/nvme/connect_stress/connect_stress.o 00:03:15.422 CC test/nvme/sgl/sgl.o 00:03:15.422 CC test/nvme/overhead/overhead.o 00:03:15.422 CC test/nvme/startup/startup.o 00:03:15.422 CC test/nvme/aer/aer.o 00:03:15.422 CC test/nvme/reserve/reserve.o 00:03:15.422 CC test/nvme/simple_copy/simple_copy.o 00:03:15.422 CC test/nvme/err_injection/err_injection.o 00:03:15.422 CC test/accel/dif/dif.o 00:03:15.422 LINK thread 00:03:15.422 CC test/blobfs/mkfs/mkfs.o 00:03:15.422 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.422 CC test/nvme/boot_partition/boot_partition.o 00:03:15.422 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.422 CC test/nvme/fdp/fdp.o 00:03:15.422 CC test/nvme/compliance/nvme_compliance.o 00:03:15.422 CC test/nvme/cuse/cuse.o 00:03:15.422 CC test/lvol/esnap/esnap.o 00:03:15.422 LINK idxd_perf 00:03:15.423 LINK vhost 00:03:15.681 LINK scheduler 00:03:15.681 LINK startup 00:03:15.681 LINK connect_stress 00:03:15.681 LINK boot_partition 00:03:15.681 LINK mkfs 00:03:15.681 LINK doorbell_aers 00:03:15.681 LINK err_injection 00:03:15.681 LINK overhead 00:03:15.681 LINK aer 00:03:15.681 LINK memory_ut 00:03:15.681 LINK reserve 00:03:15.940 CC examples/nvme/hotplug/hotplug.o 00:03:15.940 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.940 CC examples/nvme/arbitration/arbitration.o 00:03:15.940 CC examples/nvme/hello_world/hello_world.o 00:03:15.940 CC examples/nvme/reconnect/reconnect.o 00:03:15.940 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:15.940 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:15.940 CC examples/nvme/abort/abort.o 00:03:15.940 LINK simple_copy 00:03:15.940 CC examples/accel/perf/accel_perf.o 00:03:15.940 LINK fused_ordering 00:03:15.940 LINK reset 00:03:15.940 LINK nvme_compliance 00:03:15.940 LINK nvme_dp 00:03:15.940 CC examples/blob/hello_world/hello_blob.o 00:03:15.940 LINK sgl 00:03:15.940 CC examples/blob/cli/blobcli.o 00:03:15.940 LINK fdp 00:03:15.940 LINK dif 00:03:16.197 LINK cmb_copy 00:03:16.197 LINK hello_world 00:03:16.197 LINK 
pmr_persistence 00:03:16.197 LINK hotplug 00:03:16.197 LINK hello_blob 00:03:16.197 LINK abort 00:03:16.197 LINK reconnect 00:03:16.197 LINK arbitration 00:03:16.456 LINK accel_perf 00:03:16.456 LINK nvme_manage 00:03:16.456 CC test/bdev/bdevio/bdevio.o 00:03:16.456 LINK blobcli 00:03:16.713 LINK iscsi_fuzz 00:03:16.713 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.713 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.970 LINK bdevio 00:03:16.970 LINK hello_bdev 00:03:16.970 LINK cuse 00:03:17.536 LINK bdevperf 00:03:17.794 CC examples/nvmf/nvmf/nvmf.o 00:03:18.052 LINK nvmf 00:03:20.618 LINK esnap 00:03:20.876 00:03:20.876 real 0m48.793s 00:03:20.876 user 10m4.105s 00:03:20.876 sys 2m26.290s 00:03:20.876 09:17:53 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:20.876 09:17:53 make -- common/autotest_common.sh@10 -- $ set +x 00:03:20.876 ************************************ 00:03:20.876 END TEST make 00:03:20.876 ************************************ 00:03:20.876 09:17:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:20.876 09:17:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:20.876 09:17:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:20.876 09:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:20.876 09:17:53 -- pm/common@44 -- $ pid=310004 00:03:20.876 09:17:53 -- pm/common@50 -- $ kill -TERM 310004 00:03:20.876 09:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:20.876 09:17:53 -- pm/common@44 -- $ pid=310006 00:03:20.876 09:17:53 -- pm/common@50 -- $ kill -TERM 310006 00:03:20.876 09:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:20.876 09:17:53 -- pm/common@44 -- $ pid=310008 00:03:20.876 09:17:53 -- pm/common@50 -- $ kill -TERM 310008 00:03:20.876 09:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:20.876 09:17:53 -- pm/common@44 -- $ pid=310038 00:03:20.876 09:17:53 -- pm/common@50 -- $ sudo -E kill -TERM 310038 00:03:20.876 09:17:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:20.876 09:17:53 -- nvmf/common.sh@7 -- # uname -s 00:03:20.876 09:17:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:20.876 09:17:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:20.876 09:17:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:20.876 09:17:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:20.876 09:17:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:20.876 09:17:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:20.876 09:17:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:20.876 09:17:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:20.876 09:17:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:20.876 09:17:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:20.876 09:17:53 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:03:20.876 09:17:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:03:20.876 09:17:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:20.876 09:17:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:20.876 09:17:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:20.876 09:17:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:20.876 09:17:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:20.876 09:17:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:20.876 09:17:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:20.876 09:17:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:20.876 09:17:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.876 09:17:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.876 09:17:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.876 09:17:53 -- paths/export.sh@5 -- # export PATH 00:03:20.876 09:17:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.876 09:17:53 -- nvmf/common.sh@47 -- # : 0 00:03:20.876 09:17:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:20.876 09:17:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:20.876 09:17:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:20.876 09:17:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:20.876 09:17:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:20.876 09:17:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:20.876 09:17:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:20.876 09:17:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:20.876 09:17:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:20.876 09:17:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:20.876 09:17:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:20.876 09:17:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:20.876 09:17:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.876 09:17:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:20.876 09:17:53 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.876 09:17:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:20.876 09:17:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:20.876 09:17:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:20.876 09:17:53 -- spdk/autotest.sh@48 -- # udevadm_pid=365411 00:03:20.876 09:17:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:20.876 09:17:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:20.876 09:17:53 -- pm/common@17 -- # local monitor 00:03:20.876 09:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.876 09:17:53 -- pm/common@21 -- # date +%s 00:03:21.135 09:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.135 09:17:53 -- pm/common@21 -- # date +%s 00:03:21.135 09:17:53 -- pm/common@25 -- # sleep 1 00:03:21.135 09:17:53 -- pm/common@21 -- # date +%s 00:03:21.135 09:17:53 -- pm/common@21 -- # date +%s 00:03:21.135 09:17:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891873 00:03:21.135 09:17:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891873 00:03:21.135 09:17:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891873 00:03:21.135 09:17:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891873 00:03:21.135 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891873_collect-vmstat.pm.log 00:03:21.135 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891873_collect-cpu-load.pm.log 00:03:21.135 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891873_collect-cpu-temp.pm.log 00:03:21.135 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891873_collect-bmc-pm.bmc.pm.log 00:03:22.071 09:17:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.071 09:17:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.071 09:17:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.071 09:17:54 -- common/autotest_common.sh@10 -- # set +x 00:03:22.071 09:17:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.071 09:17:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:22.071 09:17:54 -- common/autotest_common.sh@10 -- # set +x 00:03:22.071 09:17:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:22.071 09:17:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.071 09:17:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
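The four "Redirecting to ..." records above come from the resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) that autotest launches before the tests and later stops by sending TERM to the pids recorded in the *.pid files under the power output directory, as seen at the top of this section. A minimal sketch of that pid-file start/stop pattern follows; it is not the actual SPDK pm/common helpers, and the /tmp/power directory, the start_monitor/stop_monitors names, and the collect-cpu-load sampler are assumptions made only for illustration.

    #!/usr/bin/env bash
    # Sketch of the pid-file pattern the log shows: start each collector in the
    # background, record its pid, and stop everything later with kill -TERM.

    output=/tmp/power          # assumed output directory for this sketch
    mkdir -p "$output"

    start_monitor() {          # $1 = collector name, remaining args = command
        local name=$1; shift
        "$@" &
        echo $! > "$output/$name.pid"    # record the collector's pid
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$output"/*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
            rm -f "$pidfile"
        done
    }

    # usage: sample system load once a second while the workload runs, then tear down
    start_monitor collect-cpu-load bash -c "while sleep 1; do uptime >> $output/cpu-load.log; done"
    # ... run tests ...
    stop_monitors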
00:03:22.071 09:17:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:22.071 09:17:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.071 09:17:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.071 09:17:54 -- common/autotest_common.sh@1453 -- # uname 00:03:22.071 09:17:54 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:22.071 09:17:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.071 09:17:54 -- common/autotest_common.sh@1473 -- # uname 00:03:22.071 09:17:54 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:22.071 09:17:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:22.071 09:17:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:22.071 09:17:54 -- spdk/autotest.sh@72 -- # hash lcov 00:03:22.071 09:17:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:22.071 09:17:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:22.071 --rc lcov_branch_coverage=1 00:03:22.071 --rc lcov_function_coverage=1 00:03:22.071 --rc genhtml_branch_coverage=1 00:03:22.071 --rc genhtml_function_coverage=1 00:03:22.071 --rc genhtml_legend=1 00:03:22.071 --rc geninfo_all_blocks=1 00:03:22.071 ' 00:03:22.071 09:17:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:22.071 --rc lcov_branch_coverage=1 00:03:22.071 --rc lcov_function_coverage=1 00:03:22.071 --rc genhtml_branch_coverage=1 00:03:22.071 --rc genhtml_function_coverage=1 00:03:22.071 --rc genhtml_legend=1 00:03:22.071 --rc geninfo_all_blocks=1 00:03:22.071 ' 00:03:22.071 09:17:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:22.071 --rc lcov_branch_coverage=1 00:03:22.071 --rc lcov_function_coverage=1 00:03:22.071 --rc genhtml_branch_coverage=1 00:03:22.071 --rc genhtml_function_coverage=1 00:03:22.071 --rc genhtml_legend=1 00:03:22.071 --rc geninfo_all_blocks=1 00:03:22.071 --no-external' 00:03:22.071 09:17:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:22.071 --rc lcov_branch_coverage=1 00:03:22.071 --rc lcov_function_coverage=1 00:03:22.071 --rc genhtml_branch_coverage=1 00:03:22.071 --rc genhtml_function_coverage=1 00:03:22.071 --rc genhtml_legend=1 00:03:22.071 --rc geninfo_all_blocks=1 00:03:22.071 --no-external' 00:03:22.071 09:17:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:22.071 lcov: LCOV version 1.14 00:03:22.071 09:17:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:23.973 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:23.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:23.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:03:23.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:23.975 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:23.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:23.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:24.234 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:24.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:24.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:42.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.316 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:57.188 09:18:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:57.188 09:18:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.188 09:18:29 -- common/autotest_common.sh@10 -- # set +x 00:03:57.188 09:18:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:57.188 09:18:29 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.121 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:03:58.121 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:58.121 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:58.121 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:58.121 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:58.121 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:58.121 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:58.121 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:58.121 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:58.121 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:58.121 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:58.121 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:58.121 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:58.121 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:58.121 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:58.121 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:58.379 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:58.379 09:18:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:58.379 09:18:30 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:58.379 09:18:30 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:58.379 09:18:30 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:58.379 09:18:30 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:58.379 09:18:30 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:58.379 09:18:30 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:58.379 
09:18:30 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.379 09:18:30 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:58.379 09:18:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:58.379 09:18:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.379 09:18:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:58.379 09:18:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:58.379 09:18:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:58.379 09:18:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:58.379 No valid GPT data, bailing 00:03:58.379 09:18:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.379 09:18:31 -- scripts/common.sh@391 -- # pt= 00:03:58.379 09:18:31 -- scripts/common.sh@392 -- # return 1 00:03:58.379 09:18:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:58.379 1+0 records in 00:03:58.379 1+0 records out 00:03:58.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00197444 s, 531 MB/s 00:03:58.379 09:18:31 -- spdk/autotest.sh@118 -- # sync 00:03:58.379 09:18:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.379 09:18:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.379 09:18:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.910 09:18:33 -- spdk/autotest.sh@124 -- # uname -s 00:04:00.910 09:18:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:00.910 09:18:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:00.910 09:18:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.910 09:18:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.910 09:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 ************************************ 00:04:00.910 START TEST setup.sh 00:04:00.910 ************************************ 00:04:00.910 09:18:33 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:00.910 * Looking for test storage... 00:04:00.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.910 09:18:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:00.910 09:18:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:00.910 09:18:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:00.910 09:18:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.910 09:18:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.910 09:18:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 ************************************ 00:04:00.910 START TEST acl 00:04:00.910 ************************************ 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:00.910 * Looking for test storage... 
00:04:00.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.910 09:18:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.910 09:18:33 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:00.910 09:18:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:00.910 09:18:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:00.910 09:18:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:00.910 09:18:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:00.910 09:18:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:00.910 09:18:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.910 09:18:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.288 09:18:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:02.288 09:18:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:02.288 09:18:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.288 09:18:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:02.288 09:18:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.288 09:18:34 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:03.223 Hugepages 00:04:03.223 node hugesize free / total 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 00:04:03.223 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.223 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.482 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.483 09:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:81:00.0 == *:*:*.* ]] 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:03.483 09:18:36 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:03.483 09:18:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.483 09:18:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.483 09:18:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:03.483 ************************************ 00:04:03.483 START TEST denied 00:04:03.483 ************************************ 00:04:03.483 09:18:36 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:03.483 09:18:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:81:00.0' 00:04:03.483 09:18:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:03.483 09:18:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:81:00.0' 00:04:03.483 09:18:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.483 09:18:36 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.858 0000:81:00.0 (8086 0a54): Skipping denied controller at 0000:81:00.0 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:81:00.0 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:81:00.0 ]] 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/driver 00:04:04.858 09:18:37 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.858 09:18:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.386 00:04:07.386 real 0m3.812s 00:04:07.386 user 0m1.129s 00:04:07.386 sys 0m1.799s 00:04:07.386 09:18:39 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.386 09:18:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:07.386 ************************************ 00:04:07.386 END TEST denied 00:04:07.386 ************************************ 00:04:07.386 09:18:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:07.386 09:18:39 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.386 09:18:39 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.386 09:18:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:07.386 ************************************ 00:04:07.386 START TEST allowed 00:04:07.386 ************************************ 00:04:07.386 09:18:39 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:07.386 09:18:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:81:00.0 00:04:07.386 09:18:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:07.386 09:18:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:81:00.0 .*: nvme -> .*' 00:04:07.386 09:18:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.386 09:18:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.669 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:04:10.669 09:18:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:10.669 09:18:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:10.669 09:18:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:10.669 09:18:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.669 09:18:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.042 00:04:12.042 real 0m4.781s 00:04:12.042 user 0m0.966s 00:04:12.042 sys 0m1.745s 00:04:12.042 09:18:44 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.042 09:18:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:12.042 ************************************ 00:04:12.042 END TEST allowed 00:04:12.042 ************************************ 00:04:12.042 00:04:12.042 real 0m11.511s 00:04:12.042 user 0m3.275s 00:04:12.042 sys 0m5.355s 00:04:12.042 09:18:44 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.042 09:18:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.042 ************************************ 00:04:12.042 END TEST acl 00:04:12.042 ************************************ 00:04:12.042 09:18:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:12.042 09:18:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.042 09:18:44 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.042 09:18:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 ************************************ 00:04:12.302 START TEST hugepages 00:04:12.302 ************************************ 00:04:12.302 09:18:44 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:12.302 * Looking for test storage... 00:04:12.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.302 09:18:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.303 09:18:44 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 44309492 kB' 'MemAvailable: 47774000 kB' 'Buffers: 11936 kB' 'Cached: 9157640 kB' 'SwapCached: 0 kB' 'Active: 6861040 kB' 'Inactive: 3463736 kB' 'Active(anon): 6446428 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1158456 kB' 'Mapped: 152864 kB' 'Shmem: 5291228 kB' 'KReclaimable: 159568 kB' 'Slab: 452132 kB' 'SReclaimable: 159568 kB' 'SUnreclaim: 292564 kB' 'KernelStack: 12816 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7878712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193156 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:12.303 09:18:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
[repetitive per-key trace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo key (MemFree, MemAvailable, Buffers, Cached, ... HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp), compares it against Hugepagesize, and skips it with continue]
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:12.304 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
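At this point the trace has located the 2048 kB default hugepage size, recorded the per-size sysfs path and the global /proc/sys/vm/nr_hugepages knob, enumerated the two NUMA nodes, and zeroed any pre-existing per-node hugepage pools (clear_hp) before CLEAR_HUGE is exported. A minimal standalone sketch of that flow, for orientation only (it is not the SPDK setup scripts themselves and must run as root):

#!/usr/bin/env bash
# Read the default hugepage size (e.g. 2048, in kB) from /proc/meminfo.
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
# Paths the harness works with: the per-size pool and the global kernel knob.
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
global_huge_nr=/proc/sys/vm/nr_hugepages
# Zero every per-node hugepage pool so the test starts from a clean slate.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done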
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:12.305 09:18:44 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:12.305 09:18:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:12.305 09:18:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:12.305 09:18:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:12.305 ************************************
00:04:12.305 START TEST default_setup
00:04:12.305 ************************************
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.305 09:18:44 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:13.679 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:13.679 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:13.679 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:15.584 0000:81:00.0 (8086 0a54): nvme -> vfio-pci
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:15.584 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:15.585 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46405212 kB' 'MemAvailable: 49869560 kB' 'Buffers: 11936 kB' 'Cached: 9157740 kB' 'SwapCached: 0 kB' 'Active: 6879760 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465148 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176712 kB' 'Mapped: 152952 kB' 'Shmem: 5291328 kB' 'KReclaimable: 159248 kB' 'Slab: 451424 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292176 kB' 'KernelStack: 12800 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7899896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193300 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB'
[repetitive per-key trace condensed: setup/common.sh@31-32 compares each key of the snapshot above (MemTotal through HardwareCorrupted) against AnonHugePages and skips it with continue]
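The sizing in the default_setup trace above is internally consistent: get_test_nr_hugepages is asked for 2097152 kB on node 0, and with the 2048 kB default hugepage size that works out to the nr_hugepages=1024 the trace records; the same 1024 pages then show up in the meminfo snapshot as HugePages_Total/HugePages_Free with Hugetlb: 2097152 kB. A quick check of that arithmetic (illustrative only):

# 2097152 kB requested / 2048 kB per page = 1024 pages; 1024 pages * 2048 kB = 2097152 kB of Hugetlb
echo "$((2097152 / 2048)) pages, $((1024 * 2048)) kB"   # -> 1024 pages, 2097152 kB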
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
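verify_nr_hugepages drives these lookups through get_meminfo, which walks the snapshot until the requested key matches and echoes its value (0 for both AnonHugePages and HugePages_Surp here). Outside the harness the same counters can be read in one line; a standalone equivalent (illustrative only, not the harness code):

# Pull the hugepage-related counters the verification step cares about straight from /proc/meminfo.
awk '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ {print $1, $2}' /proc/meminfo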
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:15.586 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46406788 kB' 'MemAvailable: 49871136 kB' 'Buffers: 11936 kB' 'Cached: 9157740 kB' 'SwapCached: 0 kB' 'Active: 6879372 kB' 'Inactive: 3463736 kB' 'Active(anon): 6464760 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176712 kB' 'Mapped: 152952 kB' 'Shmem: 5291328 kB' 'KReclaimable: 159248 kB' 'Slab: 451408 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292160 kB' 'KernelStack: 12832 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7899916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193252 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB'
[repetitive per-key trace condensed: setup/common.sh@31-32 compares each key of the snapshot above (MemTotal through HugePages_Rsvd) against HugePages_Surp and skips it with continue]
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read
-r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46406324 kB' 'MemAvailable: 49870672 kB' 'Buffers: 11936 kB' 'Cached: 9157760 kB' 'SwapCached: 0 kB' 'Active: 6879432 kB' 'Inactive: 3463736 kB' 'Active(anon): 6464820 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176704 kB' 'Mapped: 152896 kB' 'Shmem: 5291348 kB' 'KReclaimable: 159248 kB' 'Slab: 451712 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292464 kB' 'KernelStack: 12816 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7899936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193236 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.588 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.589 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.590 nr_hugepages=1024 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.590 resv_hugepages=0 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.590 surplus_hugepages=0 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.590 anon_hugepages=0 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46406324 kB' 'MemAvailable: 49870672 kB' 'Buffers: 11936 kB' 'Cached: 9157784 kB' 'SwapCached: 0 kB' 'Active: 6879428 kB' 'Inactive: 3463736 kB' 'Active(anon): 6464816 kB' 'Inactive(anon): 0 kB' 
'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176700 kB' 'Mapped: 152896 kB' 'Shmem: 5291372 kB' 'KReclaimable: 159248 kB' 'Slab: 451712 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292464 kB' 'KernelStack: 12816 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7899960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193236 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.590 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
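The long runs of "# continue" entries in this trace come from setup/common.sh's get_meminfo helper walking every field of /proc/meminfo (or a per-node meminfo file) until it reaches the key it was asked for. A minimal stand-alone sketch of that lookup pattern follows; it is illustrative only — the real setup/common.sh maps the file into an array and steps through it, which is exactly what produces the per-field trace lines seen here:

# Illustrative re-implementation of the get_meminfo lookup seen in the trace;
# not the actual setup/common.sh source.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node stats live under /sys and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip fields until the requested key is found, then print its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1   # field not present
}
# Example: get_meminfo HugePages_Surp      -> 0 on this machine
#          get_meminfo HugePages_Total 0   -> 1024 (node 0)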
00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.591 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
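At this point get_meminfo HugePages_Total has returned 1024, so hugepages.sh can verify the pool: the kernel-reported total must equal the requested page count plus surplus and reserved pages, and the pages are then attributed to NUMA nodes by reading each node's meminfo. A rough sketch of that consistency check, reusing the get_meminfo sketch above (variable names mirror the trace rather than quoting hugepages.sh):

nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)     # 0 in the trace above
resv=$(get_meminfo HugePages_Rsvd)     # 0 in the trace above
total=$(get_meminfo HugePages_Total)   # 1024 in the trace above

if (( total == nr_hugepages + surp + resv )); then
    # Per-node view: the trace reports no_nodes=2, with the whole 1024-page
    # pool on node 0 and none on node 1.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        echo "node$n: $(get_meminfo HugePages_Total "$n") hugepages"
    done
else
    echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2
fi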
00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20290508 kB' 'MemUsed: 12539376 kB' 'SwapCached: 0 kB' 'Active: 6007908 kB' 'Inactive: 3284612 kB' 'Active(anon): 5819824 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332524 kB' 'Mapped: 71004 kB' 'AnonPages: 963164 kB' 'Shmem: 4859828 kB' 'KernelStack: 8312 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121248 kB' 'Slab: 318308 kB' 'SReclaimable: 121248 kB' 'SUnreclaim: 197060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.592 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.593 node0=1024 expecting 1024 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.593 00:04:15.593 real 0m3.215s 00:04:15.593 user 0m0.644s 00:04:15.593 sys 0m0.936s 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.593 09:18:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:15.593 ************************************ 00:04:15.593 END TEST default_setup 00:04:15.593 ************************************ 00:04:15.593 09:18:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:15.593 09:18:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.593 09:18:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.593 09:18:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.593 ************************************ 00:04:15.593 START TEST per_node_1G_alloc 00:04:15.593 ************************************ 00:04:15.593 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:15.593 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:15.593 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:15.593 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.594 09:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.528 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:16.528 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.528 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:16.528 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:16.528 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:16.528 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:16.528 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:16.790 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:16.790 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.790 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:16.790 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:16.790 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:16.790 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:16.790 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:16.790 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:16.790 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:16.790 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:16.790 09:18:49 
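The per_node_1G_alloc trace above boils down to a small piece of sizing arithmetic: the test asks get_test_nr_hugepages for 1048576 kB (1 GiB) spread over nodes 0 and 1, the 2048 kB default hugepage size turns that into nr_hugepages=512, and get_test_nr_hugepages_per_node then marks 512 pages for each listed node, which is why setup.sh is driven with NRHUGE=512 HUGENODE=0,1 and the system-wide count that shows up afterwards is 1024. A minimal sketch of that arithmetic, assuming the 2048 kB Hugepagesize visible in the meminfo dumps below (illustrative shell, not the test code itself):

  size_kb=1048576                                # requested pool: 1 GiB expressed in kB
  hugepage_kb=2048                               # default Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))      # 512 pages
  user_nodes=(0 1)                               # HUGENODE=0,1
  nodes_test=()
  for node in "${user_nodes[@]}"; do
      nodes_test[node]=$nr_hugepages             # 512 pages requested on each NUMA node
  done
  echo "NRHUGE=$nr_hugepages total=$(( nr_hugepages * ${#user_nodes[@]} ))"   # NRHUGE=512 total=1024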
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46412392 kB' 'MemAvailable: 49876740 kB' 'Buffers: 11936 kB' 'Cached: 9157856 kB' 'SwapCached: 0 kB' 'Active: 6879860 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465248 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1177000 kB' 'Mapped: 152988 kB' 'Shmem: 5291444 kB' 'KReclaimable: 159248 kB' 'Slab: 451644 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292396 kB' 'KernelStack: 12784 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193204 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.790 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.791 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
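Almost every entry in this stretch of the log is the same short pattern (IFS=': ', read -r var val _, then a continue): it is the common.sh get_meminfo helper scanning a meminfo file one field at a time until it reaches the requested key, here AnonHugePages, HugePages_Surp and HugePages_Rsvd in turn, each of which resolves to 0. The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test just before the AnonHugePages lookup appears to be the script checking that transparent hugepages are not set to "never" (the bracketed word is the active mode) before it counts AnonHugePages. A minimal re-sketch of the helper, written to match the behaviour the trace shows rather than quoting common.sh verbatim:

  shopt -s extglob
  get_meminfo() {                          # usage: get_meminfo <field> [numa-node]
      local get=$1 node=$2 var val _ line mem
      local mem_f=/proc/meminfo
      # A node argument switches the source to that node's own meminfo file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each key with "Node <id> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue # the continues that dominate this trace
          echo "$val"                      # value only; any "kB" unit falls into _
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp       # system-wide lookup; prints 0 on this host
  get_meminfo HugePages_Total 0    # same lookup against node0's meminfo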
00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46412792 kB' 'MemAvailable: 49877140 kB' 'Buffers: 11936 kB' 'Cached: 9157860 kB' 'SwapCached: 0 kB' 'Active: 6879528 kB' 'Inactive: 3463736 kB' 'Active(anon): 6464916 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176640 kB' 'Mapped: 152924 kB' 'Shmem: 5291448 kB' 'KReclaimable: 159248 kB' 'Slab: 451636 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292388 kB' 'KernelStack: 12784 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193172 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 
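The full meminfo snapshot embedded in the printf entry above is self-consistent with the allocation the test just made: HugePages_Total 1024 pages x Hugepagesize 2048 kB = 2097152 kB, exactly the Hugetlb figure, and HugePages_Free is still 1024 because nothing has mapped the pool yet. To recompute that cross-check on a live host, a one-off awk over /proc/meminfo is enough (illustrative, not part of the test scripts):

  awk '/^HugePages_Total/ {pages=$2} /^Hugepagesize/ {kb=$2}
       END {printf "pool = %d pages x %d kB = %d kB\n", pages, kb, pages*kb}' /proc/meminfo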
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.792 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.793 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46412936 kB' 'MemAvailable: 49877284 kB' 'Buffers: 11936 kB' 'Cached: 9157876 kB' 'SwapCached: 0 kB' 'Active: 6879640 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465028 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176792 kB' 'Mapped: 152924 kB' 'Shmem: 5291464 kB' 'KReclaimable: 159248 kB' 'Slab: 451692 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292444 kB' 'KernelStack: 12800 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193156 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.794 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.795 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.795 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.058 nr_hugepages=1024 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.058 resv_hugepages=0 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.058 surplus_hugepages=0 00:04:17.058 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.058 anon_hugepages=0 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.058 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46413224 kB' 'MemAvailable: 49877572 kB' 'Buffers: 11936 kB' 'Cached: 9157900 kB' 'SwapCached: 0 kB' 'Active: 6879704 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465092 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176864 kB' 'Mapped: 152924 kB' 'Shmem: 5291488 kB' 'KReclaimable: 159248 kB' 'Slab: 451692 kB' 'SReclaimable: 159248 kB' 'SUnreclaim: 292444 kB' 'KernelStack: 12832 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193156 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.059 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.060 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21351712 kB' 'MemUsed: 11478172 kB' 'SwapCached: 0 kB' 'Active: 6006892 kB' 'Inactive: 3284612 kB' 'Active(anon): 5818808 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332580 kB' 'Mapped: 71032 kB' 'AnonPages: 962036 kB' 'Shmem: 4859884 kB' 'KernelStack: 8280 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121248 kB' 'Slab: 318184 kB' 'SReclaimable: 121248 kB' 'SUnreclaim: 196936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.061 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711832 kB' 'MemFree: 25062132 kB' 'MemUsed: 2649700 kB' 'SwapCached: 0 kB' 'Active: 872864 kB' 'Inactive: 179124 kB' 'Active(anon): 646336 kB' 'Inactive(anon): 0 kB' 'Active(file): 226528 kB' 'Inactive(file): 179124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 837300 kB' 'Mapped: 81892 kB' 'AnonPages: 214832 kB' 'Shmem: 431648 kB' 'KernelStack: 4552 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 38000 kB' 'Slab: 133508 kB' 'SReclaimable: 38000 kB' 'SUnreclaim: 95508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.062 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 
09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.063 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.064 node0=512 expecting 512 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:17.064 node1=512 expecting 512 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:17.064 00:04:17.064 real 0m1.433s 00:04:17.064 user 0m0.602s 00:04:17.064 sys 0m0.784s 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.064 09:18:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.064 ************************************ 00:04:17.064 END TEST per_node_1G_alloc 00:04:17.064 ************************************ 00:04:17.064 09:18:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:17.064 09:18:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.064 09:18:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.064 09:18:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.064 ************************************ 00:04:17.064 START TEST even_2G_alloc 
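For reference, the even_2G_alloc trace that follows sizes its hugepage pool the same way the per_node_1G_alloc check above reported it ("node0=512 expecting 512", "node1=512 expecting 512"). A minimal sketch of that sizing step, reconstructed from the xtrace below rather than taken from hugepages.sh itself (variable names and the kB interpretation of the 2097152 argument are inferred from the resulting count of 1024):

default_hugepages_kb=2048        # Hugepagesize reported in the meminfo dumps in this log
request_kb=2097152               # the argument to get_test_nr_hugepages (2 GiB total)
nr_hugepages=$(( request_kb / default_hugepages_kb ))    # -> 1024 pages of 2048 kB
no_nodes=2                       # this box has two NUMA nodes (node0, node1)
declare -a nodes_test
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))      # -> 512 per node
done
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"          # set before scripts/setup.sh runs, per the trace

With 1024 pages split evenly over two nodes, the later "node0=512 expecting 512" / "node1=512 expecting 512" comparisons are simply checking that each node ended up with half of the pool.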
00:04:17.064 ************************************ 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.064 09:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.445 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:18.445 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.445 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:18.446 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:18.446 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:18.446 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:18.446 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:18.446 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:18.446 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:18.446 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:18.446 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:18.446 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:18.446 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:18.446 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:18.446 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:18.446 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:18.446 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46404312 kB' 'MemAvailable: 49868736 kB' 'Buffers: 11936 kB' 'Cached: 9157988 kB' 'SwapCached: 0 kB' 'Active: 6880096 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465484 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176992 kB' 'Mapped: 153000 kB' 'Shmem: 5291576 kB' 'KReclaimable: 159400 kB' 'Slab: 451832 kB' 'SReclaimable: 159400 kB' 'SUnreclaim: 292432 kB' 'KernelStack: 12832 kB' 'PageTables: 
8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193380 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.446 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
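The long runs of "[[ <field> == HugePages_Surp ]]" (or AnonHugePages) followed by "continue" in this trace come from a field-by-field scan of a meminfo file. A minimal sketch of that lookup, reconstructed from the xtrace (the traced helper is get_meminfo in setup/common.sh; its exact argument handling may differ, and the function name below is only illustrative):

#!/usr/bin/env bash
shopt -s extglob                   # needed for the "Node <n> " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ mem_f line
    local -a mem
    mem_f=/proc/meminfo
    # Per-node files prefix every field with "Node <n> "; use one when a node is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")             # strip the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Free:   1024" -> var/val
        [[ $var == "$get" ]] || continue         # the compare/continue entries seen above
        echo "$val"
        return 0
    done
}

get_meminfo_sketch HugePages_Total    # prints 1024 here, matching the /proc/meminfo dump above

Every non-matching field produces one compare plus one continue under xtrace, which is why each single lookup expands into the dozens of near-identical log entries surrounding this point.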
00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.447 09:18:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46417392 kB' 'MemAvailable: 49881816 kB' 'Buffers: 11936 kB' 'Cached: 9157992 kB' 'SwapCached: 0 kB' 'Active: 6879948 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465336 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176908 kB' 'Mapped: 152936 kB' 'Shmem: 5291580 kB' 'KReclaimable: 159400 kB' 'Slab: 451824 kB' 'SReclaimable: 159400 kB' 'SUnreclaim: 292424 kB' 'KernelStack: 12864 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193332 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.448 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
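A note on the trace above: the long runs of IFS=': ' / read -r var val _ / [[ ... ]] / continue are setup/common.sh's get_meminfo walking every /proc/meminfo field in order until it reaches the requested key (HugePages_Surp here), then echoing that field's value. A minimal re-creation of the lookup, assuming plain /proc/meminfo input (the real helper also snapshots the file with mapfile and strips "Node <n>" prefixes from per-node files; the function name below is illustrative):

    lookup_meminfo() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field shows up as a "continue" in the trace
            echo "${val:-0}"                   # kB for sizes, a plain page count for HugePages_*
            return 0
        done < "$file"
        echo 0                                 # sketch fallback: treat a missing field as 0
    }
    # lookup_meminfo HugePages_Surp   -> 0 on this system, hence surp=0 a few entries below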
00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.449 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46417704 kB' 'MemAvailable: 49882128 kB' 'Buffers: 11936 kB' 'Cached: 9158008 kB' 'SwapCached: 0 kB' 'Active: 6879808 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465196 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176740 kB' 'Mapped: 152936 kB' 'Shmem: 5291596 kB' 'KReclaimable: 159400 kB' 'Slab: 451892 kB' 'SReclaimable: 159400 kB' 'SUnreclaim: 292492 kB' 'KernelStack: 12832 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193332 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
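Each get_meminfo call in this stretch opens with "local node=" left empty, so the existence test on /sys/devices/system/node/node/meminfo fails and the helper falls back to /proc/meminfo; when a node index is supplied (as in the node-0 pass at the end of this section) the per-node meminfo file is read instead. A sketch of that source selection (function and variable names are illustrative):

    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        # with node="" this path is /sys/devices/system/node/node/meminfo, which never exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    # pick_meminfo_file ""  -> /proc/meminfo                           (the system-wide reads here)
    # pick_meminfo_file 0   -> /sys/devices/system/node/node0/meminfo  (the node-0 read later on)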
00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.450 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 
09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.451 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:18.452 nr_hugepages=1024 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.452 resv_hugepages=0 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.452 surplus_hugepages=0 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.452 anon_hugepages=0 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46416948 
kB' 'MemAvailable: 49881372 kB' 'Buffers: 11936 kB' 'Cached: 9158032 kB' 'SwapCached: 0 kB' 'Active: 6879904 kB' 'Inactive: 3463736 kB' 'Active(anon): 6465292 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1176768 kB' 'Mapped: 152936 kB' 'Shmem: 5291620 kB' 'KReclaimable: 159400 kB' 'Slab: 451892 kB' 'SReclaimable: 159400 kB' 'SUnreclaim: 292492 kB' 'KernelStack: 12848 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7900468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193316 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.452 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
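With surp=0 from the first scan and resv=0 from the second, hugepages.sh@107-110 only has to confirm that the kernel's HugePages_Total equals the requested nr_hugepages plus surplus and reserved pages, which for this run is simply 1024 == 1024 + 0 + 0; the HugePages_Total scan running through this stretch of the trace is that re-read. The check with this run's values substituted (a sketch; variable names follow the trace output):

    nr_hugepages=1024
    surp=0       # HugePages_Surp from the first meminfo scan
    resv=0       # HugePages_Rsvd from the second scan
    total=1024   # HugePages_Total, re-read by the scan in progress here
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages ))               || echo "unexpected surplus/reserved pages" >&2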
00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.453 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21361832 kB' 'MemUsed: 11468052 kB' 'SwapCached: 0 kB' 'Active: 6007056 kB' 'Inactive: 3284612 kB' 'Active(anon): 5818972 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332628 kB' 'Mapped: 71044 kB' 'AnonPages: 962168 kB' 'Shmem: 4859932 kB' 'KernelStack: 8312 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121408 kB' 'Slab: 318324 kB' 'SReclaimable: 121408 kB' 'SUnreclaim: 196916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.454 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.455 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711832 kB' 'MemFree: 25057412 kB' 'MemUsed: 2654420 kB' 'SwapCached: 0 kB' 'Active: 872836 kB' 'Inactive: 179124 kB' 'Active(anon): 646308 kB' 'Inactive(anon): 0 kB' 'Active(file): 226528 kB' 'Inactive(file): 179124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 837360 kB' 'Mapped: 82328 kB' 'AnonPages: 214600 kB' 'Shmem: 431708 kB' 'KernelStack: 4536 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 37992 kB' 'Slab: 133568 kB' 'SReclaimable: 37992 kB' 'SUnreclaim: 95576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 
512' 'HugePages_Surp: 0' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.456 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.457 node0=512 expecting 512 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:18.457 node1=512 expecting 512 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:18.457 00:04:18.457 real 0m1.427s 00:04:18.457 user 0m0.606s 00:04:18.457 sys 0m0.777s 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.457 09:18:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.457 ************************************ 00:04:18.457 END TEST even_2G_alloc 00:04:18.457 ************************************ 00:04:18.457 09:18:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:18.457 09:18:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.457 09:18:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.457 09:18:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.457 ************************************ 00:04:18.457 START TEST odd_alloc 00:04:18.457 ************************************ 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:18.457 09:18:51 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.457 09:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.842 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.842 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.842 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.842 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.842 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.842 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.842 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.842 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.842 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.842 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.842 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.842 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.842 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.842 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:19.842 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.842 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.842 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46400476 kB' 'MemAvailable: 49864872 kB' 'Buffers: 11936 kB' 'Cached: 9158116 kB' 'SwapCached: 0 kB' 'Active: 6876912 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462300 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173788 kB' 'Mapped: 152024 kB' 'Shmem: 5291704 kB' 'KReclaimable: 159344 kB' 'Slab: 451528 kB' 'SReclaimable: 159344 kB' 'SUnreclaim: 292184 kB' 'KernelStack: 13024 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7887280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193444 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 349788 kB' 
'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:04:19.842 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # field scan: each remaining /proc/meminfo key (Inactive(anon) through HardwareCorrupted) is compared against AnonHugePages and skipped with continue
00:04:19.843 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.843 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.843 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:19.843 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:19.843 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.843 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-@29 -- # local get=HugePages_Surp; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:19.844 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46400048 kB' 'MemAvailable: 49864444 kB' 'Buffers: 11936 kB' 'Cached: 9158120 kB' 'SwapCached: 0 kB' 'Active: 6877524 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462912 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1174448 kB' 'Mapped: 151996 kB' 'Shmem: 5291708 kB' 'KReclaimable: 159344 kB' 'Slab: 451524 kB' 'SReclaimable: 159344 kB' 'SUnreclaim: 292180 kB' 'KernelStack: 13104 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7885936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193348 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB'
00:04:19.844 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # field scan: each snapshot key (MemTotal through HugePages_Rsvd) is compared against HugePages_Surp and skipped with continue
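For readability: the field-by-field compare/continue loop traced here (and repeated for every get_meminfo call in this test) is the inner loop of the get_meminfo helper in setup/common.sh. The following is a minimal sketch of that helper reconstructed from the trace alone; the function and variable names follow what the log shows, but the actual SPDK source may differ in detail, so treat it as an illustration rather than the verbatim implementation.

#!/usr/bin/env bash
# Reconstructed sketch of get_meminfo as suggested by the setup/common.sh trace.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
        local get=$1          # meminfo key to look up, e.g. HugePages_Surp
        local node=${2:-}     # optional NUMA node; empty means system-wide /proc/meminfo
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node lookups read the node-local meminfo instead; with node empty the
        # traced check [[ -e /sys/devices/system/node/node/meminfo ]] fails and the
        # system-wide file is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node-local files prefix every line with "Node N "; strip it so each entry
        # starts with the key, matching the mem=("${mem[@]#Node +([0-9]) }") step.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
                # Split "Key:   value unit" into var/val, as in the IFS=': ' read trace.
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue
                echo "$val"
                return 0
        done
        return 1
}

With the snapshot printed above, get_meminfo walks the keys in order until it reaches the requested one and echoes its value (0 for HugePages_Surp in this snapshot), which is exactly the echo 0 / return 0 pair visible in the trace.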
00:04:19.845 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.845 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.845 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:19.845 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:19.845 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.845 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-@29 -- # local get=HugePages_Rsvd; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:19.846 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46400876 kB' 'MemAvailable: 49865272 kB' 'Buffers: 11936 kB' 'Cached: 9158140 kB' 'SwapCached: 0 kB' 'Active: 6877248 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462636 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1174596 kB' 'Mapped: 151948 kB' 'Shmem: 5291728 kB' 'KReclaimable: 159344 kB' 'Slab: 451720 kB' 'SReclaimable: 159344 kB' 'SUnreclaim: 292376 kB' 'KernelStack: 13008 kB' 'PageTables: 9512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7887320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193364 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB'
00:04:19.846 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # field scan: each snapshot key (MemTotal through HugePages_Free) is compared against HugePages_Rsvd and skipped with continue
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:19.847 nr_hugepages=1025
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.847 resv_hugepages=0
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.847 surplus_hugepages=0
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.847 anon_hugepages=0
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
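The assertions at setup/hugepages.sh@107 and @109 above are the core of the odd_alloc check: the 1025 pages requested must all be visible as ordinary hugepages, with nothing counted as surplus or reserved. A small self-contained sketch of that arithmetic follows, using the values echoed above; the variable names mirror the trace, but the real script obtains them via get_meminfo rather than hard-coding them, so this is an illustration only.

#!/usr/bin/env bash
# Sketch of the odd_alloc consistency check, with values taken from the trace above.
nr_hugepages=1025   # requested odd page count, echoed at hugepages.sh@102
surp=0              # HugePages_Surp as returned by get_meminfo
resv=0              # HugePages_Rsvd as returned by get_meminfo
anon=0              # AnonHugePages (kB), recorded at hugepages.sh@97; not asserted here

# hugepages.sh@107/@109-style assertions: every requested page is a real,
# non-surplus, non-reserved hugepage.
(( 1025 == nr_hugepages + surp + resv )) || exit 1
(( 1025 == nr_hugepages )) || exit 1

# Cross-check against the meminfo snapshot: 1025 pages * 2048 kB per page
# = 2099200 kB, matching the 'Hugetlb: 2099200 kB' line printed above.
(( 1025 * 2048 == 2099200 )) && echo "hugetlb accounting consistent"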
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.847 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-@25 -- # local get=HugePages_Total; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]
00:04:19.848 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28-@29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:19.848 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46400352 kB' 'MemAvailable: 49864748 kB' 'Buffers: 11936 kB' 'Cached: 9158140 kB' 'SwapCached: 0 kB' 'Active: 6876972 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462360 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173884 kB' 'Mapped: 151948 kB' 'Shmem: 5291728 kB' 'KReclaimable: 159344 kB' 'Slab: 451688 kB' 'SReclaimable: 159344 kB' 'SUnreclaim: 292344 kB' 'KernelStack: 13104 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7884980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193252 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB'
00:04:19.848 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # field scan: each snapshot key (MemTotal through NFS_Unstable) is compared against HugePages_Total and skipped with continue
00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.849 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21347376 kB' 'MemUsed: 11482508 kB' 'SwapCached: 0 kB' 'Active: 6004252 kB' 'Inactive: 3284612 kB' 'Active(anon): 5816168 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332668 kB' 'Mapped: 70076 kB' 'AnonPages: 959392 kB' 'Shmem: 4859972 kB' 'KernelStack: 8312 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121376 kB' 'Slab: 318276 kB' 'SReclaimable: 121376 kB' 'SUnreclaim: 196900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.850 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711832 kB' 'MemFree: 25052652 kB' 'MemUsed: 2659180 kB' 'SwapCached: 0 kB' 'Active: 871968 kB' 'Inactive: 179124 kB' 'Active(anon): 645440 kB' 'Inactive(anon): 0 kB' 'Active(file): 226528 kB' 'Inactive(file): 179124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 837452 kB' 'Mapped: 81824 kB' 'AnonPages: 213760 kB' 'Shmem: 431800 kB' 'KernelStack: 4520 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 37968 kB' 'Slab: 133484 kB' 'SReclaimable: 37968 kB' 'SUnreclaim: 95516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:19.853 node0=512 expecting 513 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:19.853 node1=513 expecting 512 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:19.853 00:04:19.853 real 0m1.414s 00:04:19.853 user 0m0.579s 00:04:19.853 sys 0m0.799s 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.853 09:18:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.853 ************************************ 00:04:19.853 END TEST odd_alloc 00:04:19.853 ************************************ 00:04:19.853 09:18:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:19.853 09:18:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.853 09:18:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.853 09:18:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 ************************************ 00:04:20.112 START TEST custom_alloc 00:04:20.112 ************************************ 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:20.112 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.113 09:18:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.047 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.047 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.047 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.047 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.048 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 45361148 kB' 'MemAvailable: 48825560 kB' 'Buffers: 11936 kB' 'Cached: 9158256 kB' 'SwapCached: 0 kB' 'Active: 6876908 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462296 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173324 kB' 'Mapped: 151920 kB' 'Shmem: 5291844 kB' 'KReclaimable: 159376 kB' 'Slab: 451736 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292360 kB' 'KernelStack: 12832 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7885184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193332 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.312 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
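Note on the custom_alloc trace above: hugepages.sh builds a per-NUMA-node layout, nodes_hp[0]=512 and nodes_hp[1]=1024, accumulates the 1536-page total into _nr_hugepages, and exports it as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' before invoking scripts/setup.sh. Below is a minimal standalone sketch of that assembly step; the array and variable names follow the trace, but it is illustrative only, not the SPDK helper itself, and the 512/1024 split is simply what this run requested.

    #!/usr/bin/env bash
    # Illustrative sketch of the HUGENODE assembly seen in the trace above.
    declare -a nodes_hp=([0]=512 [1]=1024)   # 2 MiB hugepages requested per NUMA node

    HUGENODE=()        # one "nodes_hp[N]=count" entry per node
    _nr_hugepages=0    # running total across nodes

    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done

    # Comma-joined this matches the value logged above:
    #   HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', 1536 pages in total.
    hugenode_csv=$(IFS=,; printf '%s' "${HUGENODE[*]}")
    printf 'HUGENODE=%s total=%d\n' "$hugenode_csv" "$_nr_hugepages"

setup.sh consumes that HUGENODE value to place the requested pages on each node, which is consistent with the later /proc/meminfo dumps in this log reporting HugePages_Total: 1536.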
00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.313 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 45361584 kB' 'MemAvailable: 48825996 kB' 'Buffers: 11936 kB' 'Cached: 9158256 kB' 'SwapCached: 0 kB' 'Active: 6877056 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462444 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173872 kB' 'Mapped: 151912 kB' 'Shmem: 5291844 kB' 'KReclaimable: 159376 kB' 'Slab: 451728 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292352 kB' 'KernelStack: 12864 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7885204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193284 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.314 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.315 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
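Note on the surrounding block: verify_nr_hugepages calls get_meminfo from setup/common.sh once per counter (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd), which is why the xtrace walks every /proc/meminfo key and hits "continue" until it reaches the requested one. A stripped-down reader in the same spirit is sketched below; it mirrors the mapfile, "Node N " prefix stripping, and IFS=': ' read pattern visible in the trace, but it is a sketch, not the SPDK helper.

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup, reconstructed from the trace above.
    shopt -s extglob    # needed for the +([0-9]) prefix pattern

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _

        # Per-node counters live under sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Node files prefix every line with "Node N "; strip it so keys match.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # kB for sizes, a plain count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # 1536 in this run
    get_meminfo HugePages_Surp     # 0 in this run

In the run above get_meminfo is invoked without a node argument, which is why the trace probes the literal path /sys/devices/system/node/node/meminfo (the empty $node) and then falls back to /proc/meminfo; the sketch adds an explicit -n guard instead of probing the malformed path.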
00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 45361380 kB' 'MemAvailable: 48825792 kB' 'Buffers: 11936 kB' 'Cached: 9158276 kB' 'SwapCached: 0 kB' 'Active: 6876608 kB' 'Inactive: 3463736 kB' 'Active(anon): 6461996 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173432 kB' 'Mapped: 151912 kB' 'Shmem: 5291864 kB' 'KReclaimable: 159376 kB' 'Slab: 451828 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292452 kB' 'KernelStack: 12832 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7885224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
193284 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.316 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.317 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:21.318 nr_hugepages=1536 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.318 resv_hugepages=0 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.318 surplus_hugepages=0 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.318 anon_hugepages=0 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 45361688 kB' 'MemAvailable: 48826100 kB' 'Buffers: 11936 kB' 'Cached: 9158296 kB' 'SwapCached: 0 kB' 'Active: 6876580 kB' 'Inactive: 3463736 kB' 'Active(anon): 6461968 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173424 kB' 'Mapped: 151912 kB' 'Shmem: 5291884 kB' 'KReclaimable: 159376 kB' 'Slab: 451828 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292452 kB' 'KernelStack: 12832 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7885244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193284 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
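The wall of "# continue" trace lines here is test/setup/common.sh's get_meminfo() scanning every meminfo field until it reaches the one it was asked for (HugePages_Rsvd above, HugePages_Total next), reading /proc/meminfo or, when a node is given, /sys/devices/system/node/node<N>/meminfo. A minimal sketch of that helper and of the accounting check setup/hugepages.sh performs afterwards (a simplified reconstruction from this trace, not the actual SPDK implementation):

  # get_meminfo_sketch <field> [<numa node>]  (simplified reconstruction)
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # With a node argument, read the per-node file instead of the global one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <id> "; strip it, then walk
      # the "Field: value" pairs until the requested field is found.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }
  # The kind of accounting the custom_alloc test then does for its 512 + 1024 pages:
  total=$(get_meminfo_sketch HugePages_Total)   # 1536 in the run above
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0
  (( total == 1536 + surp + resv )) && echo "hugepage accounting consistent"

Later in this trace the same helper is invoked with node=0 and node=1 against the per-node meminfo files to confirm the 512/1024 split across the two NUMA nodes.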
00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.318 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.319 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21354236 kB' 'MemUsed: 11475648 kB' 'SwapCached: 0 kB' 'Active: 6005056 kB' 'Inactive: 3284612 kB' 'Active(anon): 5816972 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332672 kB' 'Mapped: 70088 kB' 'AnonPages: 960184 kB' 'Shmem: 4859976 kB' 'KernelStack: 8280 kB' 'PageTables: 4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121408 kB' 'Slab: 318376 kB' 'SReclaimable: 121408 kB' 'SUnreclaim: 196968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.320 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.321 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.321 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711832 kB' 'MemFree: 24007536 kB' 'MemUsed: 3704296 kB' 'SwapCached: 0 kB' 'Active: 871668 kB' 'Inactive: 179124 kB' 'Active(anon): 645140 kB' 'Inactive(anon): 0 kB' 'Active(file): 226528 kB' 'Inactive(file): 179124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 837604 kB' 'Mapped: 81824 kB' 'AnonPages: 213248 kB' 'Shmem: 431952 kB' 'KernelStack: 4552 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 37968 kB' 'Slab: 133452 kB' 'SReclaimable: 37968 kB' 'SUnreclaim: 95484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 
09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.322 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.322 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.323 node0=512 expecting 512 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:21.323 node1=1024 expecting 1024 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:21.323 00:04:21.323 real 0m1.417s 00:04:21.323 user 0m0.578s 00:04:21.323 sys 0m0.799s 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.323 09:18:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.323 ************************************ 00:04:21.323 END TEST custom_alloc 00:04:21.323 ************************************ 00:04:21.323 09:18:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:21.323 09:18:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.323 09:18:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.323 09:18:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.619 ************************************ 00:04:21.619 START TEST no_shrink_alloc 00:04:21.619 ************************************ 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
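(For context on the "node0=512 expecting 512" / "node1=1024 expecting 1024" checks above and the nr_hugepages=1024 value computed for no_shrink_alloc: below is a minimal sketch, assuming 2048 kB hugepages and the standard sysfs layout, of how a per-node hugepage request can be derived from a size in kB and verified. The helper names are hypothetical and are not part of setup/hugepages.sh.)

#!/usr/bin/env bash
# Hypothetical sketch: derive a hugepage count from a requested size (kB)
# and verify a node's actual allocation through sysfs.
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

# e.g. 2097152 kB (2 GiB) with 2048 kB pages -> 1024 hugepages
pages_for_size_kb() {
  local size_kb=$1
  echo $(( size_kb / hugepagesize_kb ))
}

# Mirror the "nodeN=<actual> expecting <expected>" style check from the trace.
verify_node() {
  local node=$1 expected=$2 actual
  actual=$(cat "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages")
  echo "node${node}=${actual} expecting ${expected}"
  [[ $actual -eq $expected ]]
}

# Example, using the values visible in the trace above:
# verify_node 0 512
# verify_node 1 1024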
00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.619 09:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.555 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:22.555 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.555 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:22.555 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:22.555 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:22.555 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:22.555 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:22.555 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:22.555 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:22.555 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:22.555 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:22.555 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:22.555 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:22.555 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:22.555 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:22.555 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:22.555 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.822 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46380076 kB' 'MemAvailable: 49844488 kB' 'Buffers: 11936 kB' 'Cached: 9158380 kB' 'SwapCached: 0 kB' 'Active: 6876880 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462268 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173500 kB' 'Mapped: 152000 kB' 'Shmem: 5291968 kB' 'KReclaimable: 159376 kB' 'Slab: 451928 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292552 kB' 'KernelStack: 12880 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7885744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193380 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.822 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
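(The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries above come from common.sh's get_meminfo scanning /proc/meminfo one field at a time with IFS=': ' and read -r var val _. Below is a minimal standalone sketch of that field-matching pattern; the helper name is hypothetical and this is not the SPDK implementation itself.)

#!/usr/bin/env bash
# Hypothetical sketch of the meminfo lookup pattern seen in the trace:
# split each line on ': ', skip non-matching fields, and print the value of
# the requested field (the trailing "kB" lands in the discarded third field).
get_meminfo_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < /proc/meminfo
  return 1
}

# Example usage, matching the fields queried in the trace above:
# get_meminfo_field AnonHugePages
# get_meminfo_field HugePages_Surp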
00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.823 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46390288 kB' 'MemAvailable: 49854700 kB' 'Buffers: 11936 kB' 'Cached: 9158380 kB' 'SwapCached: 0 kB' 'Active: 6877564 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462952 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1174176 kB' 'Mapped: 151936 kB' 'Shmem: 5291968 kB' 'KReclaimable: 159376 kB' 'Slab: 451896 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292520 kB' 'KernelStack: 12960 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7885760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193348 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 
'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 
09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 
09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.824 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.825 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46393252 kB' 'MemAvailable: 49857664 kB' 'Buffers: 11936 kB' 'Cached: 9158384 kB' 'SwapCached: 0 kB' 'Active: 6877124 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462512 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173744 kB' 'Mapped: 151936 kB' 'Shmem: 5291972 kB' 'KReclaimable: 159376 kB' 'Slab: 451968 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292592 kB' 'KernelStack: 12944 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7885784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193316 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.826 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.827 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:22.828 nr_hugepages=1024 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.828 resv_hugepages=0 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.828 surplus_hugepages=0 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.828 anon_hugepages=0 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46392244 kB' 'MemAvailable: 49856656 kB' 'Buffers: 11936 kB' 'Cached: 9158424 kB' 'SwapCached: 0 kB' 'Active: 6877124 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462512 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173748 kB' 'Mapped: 151936 kB' 'Shmem: 5292012 kB' 'KReclaimable: 159376 kB' 'Slab: 451952 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292576 kB' 'KernelStack: 12928 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7885804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193252 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.828 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.829 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 
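The xtrace above is setup/common.sh walking a meminfo dump field by field with IFS=': ' and read, skipping every key until it reaches the one it was asked for (HugePages_Surp, HugePages_Rsvd, then HugePages_Total). A minimal standalone sketch of that pattern follows; the helper name is hypothetical and this is not the SPDK function verbatim, only the logic the trace shows.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (hypothetical helper, not the SPDK script).
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local var val _ line
        # A per-node query reads that node's own meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <n> "; drop it, as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<<"$line"
                if [[ $var == "$get" ]]; then
                        echo "$val"   # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
                        return 0
                fi
        done
        return 1
}

# Example usage: get_meminfo_value HugePages_Total; get_meminfo_value HugePages_Surp 0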
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:22.830 
09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20291584 kB' 'MemUsed: 12538300 kB' 'SwapCached: 0 kB' 'Active: 6006980 kB' 'Inactive: 3284612 kB' 'Active(anon): 5818896 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332696 kB' 'Mapped: 70112 kB' 'AnonPages: 962100 kB' 'Shmem: 4860000 kB' 'KernelStack: 8376 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121408 kB' 'Slab: 318336 kB' 'SReclaimable: 121408 kB' 'SUnreclaim: 196928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.830 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.831 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 09:18:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.832 node0=1024 expecting 1024 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.832 09:18:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.771 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.771 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.771 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.771 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.771 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.771 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.771 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.771 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.771 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.771 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.771 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.771 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.771 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.771 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.771 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.771 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.771 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.036 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.036 09:18:56 
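
(Reader's note, not part of the captured log.) The trace up to this point is setup/common.sh's get_meminfo helper scanning the meminfo output with IFS=': ' and `read -r var val _`, hitting `continue` on every key that is not the one requested (HugePages_Surp here) and echoing the value once it matches; hugepages.sh then folds the result into its per-node counters and prints "node0=1024 expecting 1024". A minimal, hedged sketch of that scan pattern follows — the function name is illustrative, and the real helper additionally handles per-node files under /sys/devices/system/node/nodeN/meminfo and strips the leading "Node <id> " prefix with the mem=("${mem[@]#Node +([0-9]) }") expansion visible later in this trace:

    #!/usr/bin/env bash
    # Sketch of the meminfo scan seen in the trace (not the SPDK script itself).
    get_meminfo_sketch() {
        local get=$1 var val _
        # Walk meminfo key by key; skip everything that is not the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"          # kB for sizes, a plain page count for HugePages_* keys
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Total   # -> 1024 in this run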
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.036 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46380160 kB' 'MemAvailable: 49844572 kB' 'Buffers: 11936 kB' 'Cached: 9158492 kB' 'SwapCached: 0 kB' 'Active: 6877680 kB' 'Inactive: 3463736 kB' 'Active(anon): 6463068 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1174256 kB' 'Mapped: 151828 kB' 'Shmem: 5292080 kB' 'KReclaimable: 159376 kB' 'Slab: 452004 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292628 kB' 'KernelStack: 12896 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7886056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193316 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.037 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 
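
(Reader's note, not part of the captured log.) Just before this point, hugepages.sh@96 evaluates `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, i.e. it only bothers querying AnonHugePages when the transparent-hugepage setting (here "always [madvise] never", so madvise mode) is not "[never]"; the query returns 0 kB and is stored as anon=0 at @97. A hedged sketch of that branch — the sysfs path is assumed from the printed value, and get_meminfo_sketch is the helper sketched above:

    # "always [madvise] never" in this run; path assumed, value format matches the trace.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so anonymous huge pages may exist; read the counter (kB).
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "anon=${anon}"   # anon=0 here, matching hugepages.sh@97 in the log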
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46380848 kB' 'MemAvailable: 49845260 kB' 'Buffers: 11936 kB' 'Cached: 9158492 kB' 'SwapCached: 0 kB' 'Active: 6877236 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462624 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173792 kB' 'Mapped: 152028 kB' 'Shmem: 5292080 kB' 'KReclaimable: 159376 kB' 'Slab: 451992 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292616 kB' 'KernelStack: 12880 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7886072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193300 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 
09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.038 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 
09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.039 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46380848 kB' 'MemAvailable: 49845260 kB' 'Buffers: 11936 kB' 'Cached: 9158516 kB' 'SwapCached: 0 kB' 'Active: 6877136 kB' 
'Inactive: 3463736 kB' 'Active(anon): 6462524 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173644 kB' 'Mapped: 151948 kB' 'Shmem: 5292104 kB' 'KReclaimable: 159376 kB' 'Slab: 451924 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292548 kB' 'KernelStack: 12880 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7886096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193300 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.040 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.040 09:18:56 
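
(Reader's note, not part of the captured log.) The remaining queries in this verify pass follow the same scan: hugepages.sh@99 records HugePages_Surp as surp=0, and @100 starts another get_meminfo pass for HugePages_Rsvd (the meminfo snapshots printed above already show 'HugePages_Rsvd: 0'). A hedged summary of the counters being collected, reusing the sketch helper from before; the exact comparison hugepages.sh performs with them is not shown in this excerpt beyond the earlier "node0=1024 expecting 1024" assertion:

    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    free=$(get_meminfo_sketch HugePages_Free)    # 1024 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    printf 'HugePages: total=%s free=%s surp=%s resv=%s\n' "$total" "$free" "$surp" "$resv"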
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [ ... identical "IFS=': '" / "read -r var val _" / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" trace repeated for each remaining field of the meminfo dump above; trimmed for readability ... ] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 --
# read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.042 nr_hugepages=1024 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.042 resv_hugepages=0 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.042 surplus_hugepages=0 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.042 anon_hugepages=0 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 46380596 kB' 'MemAvailable: 49845008 kB' 'Buffers: 11936 kB' 
'Cached: 9158536 kB' 'SwapCached: 0 kB' 'Active: 6877160 kB' 'Inactive: 3463736 kB' 'Active(anon): 6462548 kB' 'Inactive(anon): 0 kB' 'Active(file): 414612 kB' 'Inactive(file): 3463736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 1173644 kB' 'Mapped: 151948 kB' 'Shmem: 5292124 kB' 'KReclaimable: 159376 kB' 'Slab: 451924 kB' 'SReclaimable: 159376 kB' 'SUnreclaim: 292548 kB' 'KernelStack: 12880 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7886116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193300 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 349788 kB' 'DirectMap2M: 10055680 kB' 'DirectMap1G: 58720256 kB' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.042 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.043 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.043 09:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [ ... the same field-by-field scan repeated over the dump above, this time comparing each key against HugePages_Total; trimmed for readability ... ] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20274792 kB' 'MemUsed: 12555092 kB' 'SwapCached: 0 kB' 'Active: 6007908 kB' 'Inactive: 3284612 kB' 'Active(anon): 5819824 kB' 'Inactive(anon): 0 kB' 'Active(file): 188084 kB' 'Inactive(file): 3284612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8332804 kB' 'Mapped: 70124 kB' 'AnonPages: 962916 kB' 'Shmem: 4860108 kB' 'KernelStack: 8296 kB' 'PageTables: 
4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121408 kB' 'Slab: 318340 kB' 'SReclaimable: 121408 kB' 'SUnreclaim: 196932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
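The dump above and the repeated "IFS=': '" / "read -r var val _" / "[[ <key> == ... ]]" / "continue" lines around it are all one pattern: common.sh dumps a meminfo file, then walks it key by key until it reaches the field that was asked for (HugePages_Rsvd, HugePages_Total or HugePages_Surp here) and echoes that field's value. A minimal standalone sketch of that idea, assuming only the standard /proc/meminfo and /sys/devices/system/node/nodeN/meminfo layouts; the helper name meminfo_get is made up for illustration and is not SPDK's get_meminfo:

#!/usr/bin/env bash
# meminfo_get <key> [node] - echo the value of <key> from /proc/meminfo,
# or from one NUMA node's meminfo when a node number is given.
meminfo_get() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
    local var val _
    # Per-node files prefix each line with "Node <N> "; strip that first,
    # then split on ':' and whitespace exactly like the trace above does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$file")
    return 1
}

meminfo_get HugePages_Total      # system-wide pool size
meminfo_get HugePages_Surp 0     # surplus pages on node 0

On the machine in this log those two calls would print 1024 and 0, matching the values the scan arrives at above and below.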
00:04:24.044 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [ ... the same scan repeated over the node0 meminfo dump above, this time comparing each key against HugePages_Surp; trimmed for readability ... ] 00:04:24.045 09:18:56
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.045 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.046 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.046 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.046 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.047 node0=1024 expecting 1024 00:04:24.047 09:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.047 00:04:24.047 real 0m2.670s 00:04:24.047 user 0m1.087s 00:04:24.047 sys 0m1.502s 00:04:24.047 09:18:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.047 09:18:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 ************************************ 00:04:24.047 END TEST no_shrink_alloc 00:04:24.047 ************************************ 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:24.047 09:18:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:24.047 00:04:24.047 real 0m11.949s 00:04:24.047 user 0m4.247s 00:04:24.047 sys 0m5.838s 00:04:24.047 09:18:56 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.047 09:18:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 ************************************ 00:04:24.047 END TEST hugepages 00:04:24.047 ************************************ 00:04:24.047 09:18:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:24.047 09:18:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.047 09:18:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.047 09:18:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.306 ************************************ 00:04:24.306 START TEST driver 00:04:24.307 ************************************ 00:04:24.307 09:18:56 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:24.307 * Looking for test storage... 
00:04:24.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.307 09:18:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:24.307 09:18:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.307 09:18:56 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.840 09:18:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.840 09:18:59 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.840 09:18:59 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.840 09:18:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:26.840 ************************************ 00:04:26.840 START TEST guess_driver 00:04:26.840 ************************************ 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.840 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:26.841 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:26.841 Looking for driver=vfio-pci 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.841 09:18:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.217 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.218 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.218 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.218 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.218 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:28.218 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:28.218 09:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.120 09:19:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.644 00:04:32.644 real 0m5.605s 00:04:32.644 user 0m1.099s 00:04:32.644 sys 0m1.778s 00:04:32.644 09:19:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.644 09:19:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.644 ************************************ 00:04:32.644 END TEST guess_driver 00:04:32.644 ************************************ 00:04:32.644 00:04:32.644 real 0m8.189s 00:04:32.644 user 0m1.672s 00:04:32.644 sys 0m2.823s 00:04:32.644 09:19:04 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.645 
09:19:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.645 ************************************ 00:04:32.645 END TEST driver 00:04:32.645 ************************************ 00:04:32.645 09:19:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:32.645 09:19:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.645 09:19:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.645 09:19:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.645 ************************************ 00:04:32.645 START TEST devices 00:04:32.645 ************************************ 00:04:32.645 09:19:05 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:32.645 * Looking for test storage... 00:04:32.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:32.645 09:19:05 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:32.645 09:19:05 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:32.645 09:19:05 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.645 09:19:05 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.020 09:19:06 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:81:00.0 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:34.021 09:19:06 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:34.021 No valid GPT data, 
bailing 00:04:34.021 09:19:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:34.021 09:19:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:34.021 09:19:06 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:81:00.0 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:34.021 09:19:06 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.021 09:19:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.021 ************************************ 00:04:34.021 START TEST nvme_mount 00:04:34.021 ************************************ 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:34.021 09:19:06 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:34.021 09:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.957 Creating new GPT entries in memory. 00:04:34.957 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.957 other utilities. 00:04:34.957 09:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.957 09:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.957 09:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.957 09:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.957 09:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:36.332 Creating new GPT entries in memory. 00:04:36.332 The operation has completed successfully. 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 386182 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:81:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
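The nvme_mount trace above reduces to a zap, partition, format, mount sequence. A condensed sketch of those commands, with the long workspace paths replaced by placeholder DISK and MNT values (assumptions, not the paths the test actually uses):

```bash
#!/usr/bin/env bash
# Condensed sketch of the partition -> format -> mount sequence traced above.
# DISK and MNT are placeholders, not the workspace paths from the test run.
set -euo pipefail

DISK=/dev/nvme0n1          # assumed scratch disk; everything on it is destroyed
MNT=/tmp/nvme_mount        # assumed throwaway mount point

sgdisk "$DISK" --zap-all                 # drop any existing GPT/MBR structures
sgdisk "$DISK" --new=1:2048:2099199      # one ~1 GiB partition, as in the trace
mkfs.ext4 -qF "${DISK}p1"                # quiet, forced ext4 format
mkdir -p "$MNT"
mount "${DISK}p1" "$MNT"
touch "$MNT/test_nvme"                   # dummy file the verify step checks for
```

The real setup/common.sh additionally takes the disk under flock and waits on udev events before formatting; the sketch only mirrors the command order visible in the trace.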
00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.332 09:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:37.266 09:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.525 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.525 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.525 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.525 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.525 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.525 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.783 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.783 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.783 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.783 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:37.783 09:19:10 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:81:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:37.783 09:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.784 09:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.784 09:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:38.718 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:81:00.0 data@nvme0n1 '' '' 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.977 09:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.354 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.354 00:04:40.354 real 0m6.324s 00:04:40.354 user 0m1.544s 00:04:40.354 sys 0m2.378s 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.354 09:19:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:40.354 ************************************ 00:04:40.354 END TEST nvme_mount 00:04:40.354 ************************************ 
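The cleanup that closes the nvme_mount test is the same few commands each time: unmount if still mounted, then wipe signatures from the partition and from the whole disk, which is where the "2 bytes were erased ... 53 ef" and GPT/PMBR messages above come from. A minimal sketch, again with placeholder DISK and MNT values:

```bash
#!/usr/bin/env bash
# Minimal sketch of the cleanup_nvme step traced above.
# DISK and MNT are the same placeholder values as in the previous sketch.
DISK=/dev/nvme0n1
MNT=/tmp/nvme_mount

if mountpoint -q "$MNT"; then
    umount "$MNT"
fi
if [[ -b "${DISK}p1" ]]; then
    wipefs --all "${DISK}p1"       # clears the ext4 superblock magic (53 ef)
fi
if [[ -b "$DISK" ]]; then
    wipefs --all "$DISK"           # clears the GPT headers and protective MBR
fi
```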
00:04:40.354 09:19:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.354 09:19:12 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.354 09:19:12 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.354 09:19:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.354 ************************************ 00:04:40.355 START TEST dm_mount 00:04:40.355 ************************************ 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:40.355 09:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.355 09:19:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.290 Creating new GPT entries in memory. 00:04:41.290 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.290 other utilities. 00:04:41.290 09:19:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.290 09:19:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.290 09:19:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.290 09:19:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.290 09:19:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.666 Creating new GPT entries in memory. 00:04:42.666 The operation has completed successfully. 
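The dm_mount test starts by carving two 1 GiB GPT partitions out of the same disk via the partition_drive helper. A sketch of that loop, assuming a placeholder DISK and reusing the sector arithmetic visible in the trace; the real script also runs sync_dev_uevents.sh to wait for the new partition device nodes:

```bash
#!/usr/bin/env bash
# Sketch of the partition_drive step the dm_mount test runs: zap the disk,
# then add one 1 GiB GPT partition per iteration. DISK and PART_COUNT are
# placeholders for this sketch.
DISK=/dev/nvme0n1
PART_COUNT=2
SIZE=$(( 1073741824 / 512 ))     # 1 GiB in 512-byte sectors, as in the trace

sgdisk "$DISK" --zap-all
part_start=0
part_end=0
for (( part = 1; part <= PART_COUNT; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + SIZE - 1 ))
    # iteration 1 -> 1:2048:2099199, iteration 2 -> 2:2099200:4196351
    flock "$DISK" sgdisk "$DISK" --new="${part}:${part_start}:${part_end}"
done
```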
00:04:42.666 09:19:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.666 09:19:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.666 09:19:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.666 09:19:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.666 09:19:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:43.601 The operation has completed successfully. 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 388567 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:81:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.601 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.602 09:19:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:44.534 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:81:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.792 09:19:17 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.792 09:19:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.166 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.167 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.167 00:04:46.167 real 0m5.723s 00:04:46.167 user 0m1.041s 00:04:46.167 sys 0m1.538s 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.167 09:19:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.167 ************************************ 00:04:46.167 END TEST dm_mount 00:04:46.167 ************************************ 00:04:46.167 09:19:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:46.167 09:19:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:46.167 09:19:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.167 09:19:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.167 09:19:18 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.167 09:19:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.167 09:19:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.425 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.425 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.425 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.425 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.425 09:19:19 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:46.425 00:04:46.425 real 0m14.002s 00:04:46.425 user 0m3.234s 00:04:46.425 sys 0m4.987s 00:04:46.425 09:19:19 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.425 09:19:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.425 ************************************ 00:04:46.425 END TEST devices 00:04:46.425 ************************************ 00:04:46.425 00:04:46.425 real 0m45.889s 00:04:46.425 user 0m12.520s 00:04:46.425 sys 0m19.164s 00:04:46.425 09:19:19 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.425 09:19:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.425 ************************************ 00:04:46.425 END TEST setup.sh 00:04:46.425 ************************************ 00:04:46.425 09:19:19 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:47.798 Hugepages 00:04:47.798 node hugesize free / total 00:04:47.798 node0 1048576kB 0 / 0 00:04:47.798 node0 2048kB 2048 / 2048 00:04:47.798 node1 1048576kB 0 / 0 00:04:47.798 node1 2048kB 0 / 0 00:04:47.798 00:04:47.798 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.798 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:47.798 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:47.798 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:47.798 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:47.798 09:19:20 -- spdk/autotest.sh@130 -- # uname -s 00:04:47.798 09:19:20 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:47.798 09:19:20 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:47.798 09:19:20 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.732 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.732 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.732 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.732 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.732 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.990 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.990 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.990 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.990 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:50.887 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:04:50.887 09:19:23 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:51.819 09:19:24 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:51.819 09:19:24 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:51.819 09:19:24 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:51.819 09:19:24 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:51.819 09:19:24 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:51.819 09:19:24 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:51.819 09:19:24 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.819 09:19:24 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:51.819 09:19:24 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:51.819 09:19:24 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:51.819 09:19:24 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:81:00.0 00:04:51.819 09:19:24 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.753 Waiting for block devices as requested 00:04:53.011 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.011 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:53.269 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:53.269 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:53.269 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:53.269 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:53.527 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:53.527 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:53.527 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:53.527 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:53.784 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:53.784 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:53.784 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:53.784 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:54.043 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:54.043 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:54.043 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:54.302 09:19:26 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 
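
For reference, the get_nvme_bdfs helper traced above builds its BDF list by piping scripts/gen_nvme.sh through jq ('.config[].params.traddr'). A minimal standalone sketch of the same enumeration follows; the sysfs fallback path is an assumption for machines without the SPDK tree, not part of the harness:

    #!/usr/bin/env bash
    # Enumerate NVMe controller BDFs the same way the autotest helper does.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    # Fallback (assumption): resolve each nvme char device's PCI address from sysfs.
    if ((${#bdfs[@]} == 0)); then
        for dev in /sys/class/nvme/nvme*; do
            bdfs+=("$(basename "$(readlink -f "$dev/device")")")
        done
    fi
    printf '%s\n' "${bdfs[@]}"   # e.g. 0000:81:00.0
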
00:04:54.302 09:19:26 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:81:00.0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1500 -- # grep 0000:81:00.0/nvme/nvme 00:04:54.302 09:19:26 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 ]] 00:04:54.302 09:19:26 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:54.302 09:19:26 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:54.302 09:19:26 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:54.302 09:19:26 -- common/autotest_common.sh@1543 -- # oacs=' 0xe' 00:04:54.302 09:19:26 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:54.302 09:19:26 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:54.302 09:19:26 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:54.302 09:19:26 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:54.302 09:19:26 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:54.302 09:19:26 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:54.302 09:19:26 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:54.302 09:19:26 -- common/autotest_common.sh@1555 -- # continue 00:04:54.302 09:19:26 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:54.302 09:19:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.302 09:19:26 -- common/autotest_common.sh@10 -- # set +x 00:04:54.302 09:19:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:54.302 09:19:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.302 09:19:26 -- common/autotest_common.sh@10 -- # set +x 00:04:54.302 09:19:26 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.679 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:55.679 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:55.679 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.579 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.579 09:19:30 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:57.579 09:19:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.579 09:19:30 -- 
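
The get_nvme_ctrlr_from_bdf and OACS checks traced above map a PCI address back to its /dev/nvmeX node and then read the Optional Admin Command Support field. A hedged sketch of the same steps; the bitmask arithmetic is an assumption, since the trace only shows the resulting value 8:

    bdf=0000:81:00.0
    # Find the controller whose sysfs path contains this BDF.
    ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    nvme_ctrlr=/dev/$(basename "$ctrlr_path")                      # -> /dev/nvme0

    # Read OACS from identify-controller and test the Namespace Management bit (bit 3).
    oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0xe'
    if (( oacs & 0x8 )); then
        echo "$nvme_ctrlr supports namespace management"
    fi
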
common/autotest_common.sh@10 -- # set +x 00:04:57.579 09:19:30 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:57.579 09:19:30 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:57.579 09:19:30 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.580 09:19:30 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:57.580 09:19:30 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:57.580 09:19:30 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:57.580 09:19:30 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:57.580 09:19:30 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:57.580 09:19:30 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.580 09:19:30 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.580 09:19:30 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:57.580 09:19:30 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:57.580 09:19:30 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:81:00.0 00:04:57.580 09:19:30 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:57.580 09:19:30 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:81:00.0/device 00:04:57.580 09:19:30 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:04:57.580 09:19:30 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:57.580 09:19:30 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:04:57.580 09:19:30 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:81:00.0 00:04:57.580 09:19:30 -- common/autotest_common.sh@1590 -- # [[ -z 0000:81:00.0 ]] 00:04:57.580 09:19:30 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=393767 00:04:57.580 09:19:30 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.580 09:19:30 -- common/autotest_common.sh@1596 -- # waitforlisten 393767 00:04:57.580 09:19:30 -- common/autotest_common.sh@829 -- # '[' -z 393767 ']' 00:04:57.580 09:19:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.580 09:19:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.580 09:19:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.580 09:19:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.580 09:19:30 -- common/autotest_common.sh@10 -- # set +x 00:04:57.580 [2024-07-25 09:19:30.290785] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
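
get_nvme_bdfs_by_id 0x0a54, traced above, keeps only the controllers whose PCI device ID matches (0x0a54 is an Intel datacenter NVMe device ID on this node). A small sketch of that filter, assuming the bdfs array from the earlier enumeration:

    want=0x0a54
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $device == "$want" ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]}"   # -> 0000:81:00.0
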
00:04:57.580 [2024-07-25 09:19:30.290868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393767 ] 00:04:57.838 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.838 [2024-07-25 09:19:30.354147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.838 [2024-07-25 09:19:30.474302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.095 09:19:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.095 09:19:30 -- common/autotest_common.sh@862 -- # return 0 00:04:58.095 09:19:30 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:04:58.095 09:19:30 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:04:58.095 09:19:30 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0 00:05:01.384 nvme0n1 00:05:01.384 09:19:33 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:01.384 [2024-07-25 09:19:34.073541] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:01.384 request: 00:05:01.384 { 00:05:01.384 "nvme_ctrlr_name": "nvme0", 00:05:01.384 "password": "test", 00:05:01.384 "method": "bdev_nvme_opal_revert", 00:05:01.384 "req_id": 1 00:05:01.384 } 00:05:01.384 Got JSON-RPC error response 00:05:01.384 response: 00:05:01.384 { 00:05:01.384 "code": -32602, 00:05:01.384 "message": "Invalid parameters" 00:05:01.384 } 00:05:01.384 09:19:34 -- common/autotest_common.sh@1602 -- # true 00:05:01.384 09:19:34 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:05:01.384 09:19:34 -- common/autotest_common.sh@1606 -- # killprocess 393767 00:05:01.384 09:19:34 -- common/autotest_common.sh@948 -- # '[' -z 393767 ']' 00:05:01.384 09:19:34 -- common/autotest_common.sh@952 -- # kill -0 393767 00:05:01.384 09:19:34 -- common/autotest_common.sh@953 -- # uname 00:05:01.384 09:19:34 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.384 09:19:34 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 393767 00:05:01.384 09:19:34 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.384 09:19:34 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.384 09:19:34 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 393767' 00:05:01.384 killing process with pid 393767 00:05:01.384 09:19:34 -- common/autotest_common.sh@967 -- # kill 393767 00:05:01.384 09:19:34 -- common/autotest_common.sh@972 -- # wait 393767 00:05:04.662 09:19:36 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:04.662 09:19:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:04.662 09:19:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.662 09:19:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.662 09:19:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:04.662 09:19:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.662 09:19:36 -- common/autotest_common.sh@10 -- # set +x 00:05:04.662 09:19:36 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:04.662 09:19:36 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.662 09:19:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
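
The Opal revert step traced above attaches the controller over the JSON-RPC socket and then issues bdev_nvme_opal_revert; on this drive the call fails with -32602 ("nvme0 not support opal"), which the test treats as a benign skip. The equivalent manual calls, using the same flags that appear in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attach the controller probed earlier and expose it as bdev "nvme0".
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0

    # Attempt the Opal revert; on non-Opal drives this returns JSON-RPC error -32602.
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || echo "drive does not support Opal"
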
00:05:04.662 09:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.662 09:19:36 -- common/autotest_common.sh@10 -- # set +x 00:05:04.662 ************************************ 00:05:04.662 START TEST env 00:05:04.662 ************************************ 00:05:04.662 09:19:36 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.662 * Looking for test storage... 00:05:04.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:04.662 09:19:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.662 09:19:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.662 09:19:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.662 09:19:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.662 ************************************ 00:05:04.662 START TEST env_memory 00:05:04.662 ************************************ 00:05:04.662 09:19:36 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.662 00:05:04.662 00:05:04.662 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.662 http://cunit.sourceforge.net/ 00:05:04.662 00:05:04.662 00:05:04.662 Suite: memory 00:05:04.662 Test: alloc and free memory map ...[2024-07-25 09:19:36.941690] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.662 passed 00:05:04.662 Test: mem map translation ...[2024-07-25 09:19:36.963315] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.662 [2024-07-25 09:19:36.963351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.662 [2024-07-25 09:19:36.963405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.662 [2024-07-25 09:19:36.963418] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.662 passed 00:05:04.662 Test: mem map registration ...[2024-07-25 09:19:37.006850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:04.662 [2024-07-25 09:19:37.006870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:04.662 passed 00:05:04.662 Test: mem map adjacent registrations ...passed 00:05:04.662 00:05:04.662 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.662 suites 1 1 n/a 0 0 00:05:04.662 tests 4 4 4 0 0 00:05:04.662 asserts 152 152 152 0 n/a 00:05:04.662 00:05:04.662 Elapsed time = 0.146 seconds 00:05:04.662 00:05:04.662 real 0m0.153s 00:05:04.662 user 0m0.147s 00:05:04.662 sys 0m0.006s 00:05:04.662 09:19:37 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.662 09:19:37 env.env_memory -- common/autotest_common.sh@10 -- # set 
+x 00:05:04.662 ************************************ 00:05:04.662 END TEST env_memory 00:05:04.662 ************************************ 00:05:04.662 09:19:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.662 09:19:37 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.662 09:19:37 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.663 09:19:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.663 ************************************ 00:05:04.663 START TEST env_vtophys 00:05:04.663 ************************************ 00:05:04.663 09:19:37 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.663 EAL: lib.eal log level changed from notice to debug 00:05:04.663 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.663 EAL: Detected lcore 1 as core 1 on socket 0 00:05:04.663 EAL: Detected lcore 2 as core 2 on socket 0 00:05:04.663 EAL: Detected lcore 3 as core 3 on socket 0 00:05:04.663 EAL: Detected lcore 4 as core 4 on socket 0 00:05:04.663 EAL: Detected lcore 5 as core 5 on socket 0 00:05:04.663 EAL: Detected lcore 6 as core 8 on socket 0 00:05:04.663 EAL: Detected lcore 7 as core 9 on socket 0 00:05:04.663 EAL: Detected lcore 8 as core 10 on socket 0 00:05:04.663 EAL: Detected lcore 9 as core 11 on socket 0 00:05:04.663 EAL: Detected lcore 10 as core 12 on socket 0 00:05:04.663 EAL: Detected lcore 11 as core 13 on socket 0 00:05:04.663 EAL: Detected lcore 12 as core 0 on socket 1 00:05:04.663 EAL: Detected lcore 13 as core 1 on socket 1 00:05:04.663 EAL: Detected lcore 14 as core 2 on socket 1 00:05:04.663 EAL: Detected lcore 15 as core 3 on socket 1 00:05:04.663 EAL: Detected lcore 16 as core 4 on socket 1 00:05:04.663 EAL: Detected lcore 17 as core 5 on socket 1 00:05:04.663 EAL: Detected lcore 18 as core 8 on socket 1 00:05:04.663 EAL: Detected lcore 19 as core 9 on socket 1 00:05:04.663 EAL: Detected lcore 20 as core 10 on socket 1 00:05:04.663 EAL: Detected lcore 21 as core 11 on socket 1 00:05:04.663 EAL: Detected lcore 22 as core 12 on socket 1 00:05:04.663 EAL: Detected lcore 23 as core 13 on socket 1 00:05:04.663 EAL: Detected lcore 24 as core 0 on socket 0 00:05:04.663 EAL: Detected lcore 25 as core 1 on socket 0 00:05:04.663 EAL: Detected lcore 26 as core 2 on socket 0 00:05:04.663 EAL: Detected lcore 27 as core 3 on socket 0 00:05:04.663 EAL: Detected lcore 28 as core 4 on socket 0 00:05:04.663 EAL: Detected lcore 29 as core 5 on socket 0 00:05:04.663 EAL: Detected lcore 30 as core 8 on socket 0 00:05:04.663 EAL: Detected lcore 31 as core 9 on socket 0 00:05:04.663 EAL: Detected lcore 32 as core 10 on socket 0 00:05:04.663 EAL: Detected lcore 33 as core 11 on socket 0 00:05:04.663 EAL: Detected lcore 34 as core 12 on socket 0 00:05:04.663 EAL: Detected lcore 35 as core 13 on socket 0 00:05:04.663 EAL: Detected lcore 36 as core 0 on socket 1 00:05:04.663 EAL: Detected lcore 37 as core 1 on socket 1 00:05:04.663 EAL: Detected lcore 38 as core 2 on socket 1 00:05:04.663 EAL: Detected lcore 39 as core 3 on socket 1 00:05:04.663 EAL: Detected lcore 40 as core 4 on socket 1 00:05:04.663 EAL: Detected lcore 41 as core 5 on socket 1 00:05:04.663 EAL: Detected lcore 42 as core 8 on socket 1 00:05:04.663 EAL: Detected lcore 43 as core 9 on socket 1 00:05:04.663 EAL: Detected lcore 44 as core 10 on socket 1 00:05:04.663 EAL: Detected lcore 45 as core 11 on socket 1 00:05:04.663 EAL: 
Detected lcore 46 as core 12 on socket 1 00:05:04.663 EAL: Detected lcore 47 as core 13 on socket 1 00:05:04.663 EAL: Maximum logical cores by configuration: 128 00:05:04.663 EAL: Detected CPU lcores: 48 00:05:04.663 EAL: Detected NUMA nodes: 2 00:05:04.663 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:04.663 EAL: Detected shared linkage of DPDK 00:05:04.663 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.663 EAL: Bus pci wants IOVA as 'DC' 00:05:04.663 EAL: Buses did not request a specific IOVA mode. 00:05:04.663 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:04.663 EAL: Selected IOVA mode 'VA' 00:05:04.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.663 EAL: Probing VFIO support... 00:05:04.663 EAL: IOMMU type 1 (Type 1) is supported 00:05:04.663 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:04.663 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:04.663 EAL: VFIO support initialized 00:05:04.663 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.663 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.663 EAL: Setting up physically contiguous memory... 00:05:04.663 EAL: Setting maximum number of open files to 524288 00:05:04.663 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.663 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:04.663 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.663 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 
0x201000a00000, size 400000000 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:04.663 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.663 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:04.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.663 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.663 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:04.663 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:04.663 EAL: Hugepages will be freed exactly as allocated. 00:05:04.663 EAL: No shared files mode enabled, IPC is disabled 00:05:04.663 EAL: No shared files mode enabled, IPC is disabled 00:05:04.663 EAL: TSC frequency is ~2700000 KHz 00:05:04.663 EAL: Main lcore 0 is ready (tid=7f7510c95a00;cpuset=[0]) 00:05:04.663 EAL: Trying to obtain current memory policy. 00:05:04.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.663 EAL: Restoring previous memory policy: 0 00:05:04.663 EAL: request: mp_malloc_sync 00:05:04.663 EAL: No shared files mode enabled, IPC is disabled 00:05:04.663 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.663 EAL: No shared files mode enabled, IPC is disabled 00:05:04.663 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.663 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.663 00:05:04.663 00:05:04.663 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.663 http://cunit.sourceforge.net/ 00:05:04.663 00:05:04.663 00:05:04.663 Suite: components_suite 00:05:04.663 Test: vtophys_malloc_test ...passed 00:05:04.663 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.663 EAL: Restoring previous memory policy: 4 00:05:04.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.663 EAL: request: mp_malloc_sync 00:05:04.663 EAL: No shared files mode enabled, IPC is disabled 00:05:04.663 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.664 EAL: Trying to obtain current memory policy. 
00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.664 EAL: Trying to obtain current memory policy. 00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.664 EAL: Trying to obtain current memory policy. 00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.664 EAL: Trying to obtain current memory policy. 00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.664 EAL: Trying to obtain current memory policy. 00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.664 EAL: Trying to obtain current memory policy. 
00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.664 EAL: request: mp_malloc_sync 00:05:04.664 EAL: No shared files mode enabled, IPC is disabled 00:05:04.664 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.664 EAL: Trying to obtain current memory policy. 00:05:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.664 EAL: Restoring previous memory policy: 4 00:05:04.922 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.922 EAL: request: mp_malloc_sync 00:05:04.922 EAL: No shared files mode enabled, IPC is disabled 00:05:04.922 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.922 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.922 EAL: request: mp_malloc_sync 00:05:04.922 EAL: No shared files mode enabled, IPC is disabled 00:05:04.922 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.922 EAL: Trying to obtain current memory policy. 00:05:04.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.922 EAL: Restoring previous memory policy: 4 00:05:04.922 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.922 EAL: request: mp_malloc_sync 00:05:04.922 EAL: No shared files mode enabled, IPC is disabled 00:05:04.922 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.180 EAL: request: mp_malloc_sync 00:05:05.180 EAL: No shared files mode enabled, IPC is disabled 00:05:05.180 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.180 EAL: Trying to obtain current memory policy. 
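
Each "Heap on socket 0 was expanded by N MB" message above corresponds to 2048 kB hugepages being pulled from the pool reserved at setup time (2048 pages on node0 in the Hugepages table earlier). A quick way to watch that pool while the vtophys test runs; this is a convenience check, not part of the test itself:

    # Free vs. total 2 MB hugepages on node 0 while the allocations above run.
    grep -H . /sys/devices/system/node/node0/hugepages/hugepages-2048kB/{nr,free}_hugepages
    grep -i huge /proc/meminfo
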
00:05:05.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.438 EAL: Restoring previous memory policy: 4 00:05:05.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.438 EAL: request: mp_malloc_sync 00:05:05.438 EAL: No shared files mode enabled, IPC is disabled 00:05:05.438 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.696 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.954 EAL: request: mp_malloc_sync 00:05:05.954 EAL: No shared files mode enabled, IPC is disabled 00:05:05.954 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.954 passed 00:05:05.954 00:05:05.954 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.954 suites 1 1 n/a 0 0 00:05:05.954 tests 2 2 2 0 0 00:05:05.954 asserts 497 497 497 0 n/a 00:05:05.954 00:05:05.954 Elapsed time = 1.361 seconds 00:05:05.954 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.954 EAL: request: mp_malloc_sync 00:05:05.954 EAL: No shared files mode enabled, IPC is disabled 00:05:05.954 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.954 EAL: No shared files mode enabled, IPC is disabled 00:05:05.954 EAL: No shared files mode enabled, IPC is disabled 00:05:05.954 EAL: No shared files mode enabled, IPC is disabled 00:05:05.954 00:05:05.954 real 0m1.478s 00:05:05.954 user 0m0.843s 00:05:05.954 sys 0m0.597s 00:05:05.954 09:19:38 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.954 09:19:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.954 ************************************ 00:05:05.954 END TEST env_vtophys 00:05:05.954 ************************************ 00:05:05.954 09:19:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.954 09:19:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.954 09:19:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.954 09:19:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.954 ************************************ 00:05:05.954 START TEST env_pci 00:05:05.954 ************************************ 00:05:05.954 09:19:38 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.954 00:05:05.954 00:05:05.954 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.954 http://cunit.sourceforge.net/ 00:05:05.954 00:05:05.954 00:05:05.954 Suite: pci 00:05:05.954 Test: pci_hook ...[2024-07-25 09:19:38.638817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 394789 has claimed it 00:05:05.954 EAL: Cannot find device (10000:00:01.0) 00:05:05.954 EAL: Failed to attach device on primary process 00:05:05.954 passed 00:05:05.954 00:05:05.954 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.954 suites 1 1 n/a 0 0 00:05:05.954 tests 1 1 1 0 0 00:05:05.954 asserts 25 25 25 0 n/a 00:05:05.954 00:05:05.954 Elapsed time = 0.021 seconds 00:05:05.954 00:05:05.954 real 0m0.033s 00:05:05.954 user 0m0.010s 00:05:05.954 sys 0m0.023s 00:05:05.954 09:19:38 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.954 09:19:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:05.955 ************************************ 00:05:05.955 END TEST env_pci 00:05:05.955 ************************************ 00:05:05.955 09:19:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.955 
09:19:38 env -- env/env.sh@15 -- # uname 00:05:05.955 09:19:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.955 09:19:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:05.955 09:19:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.955 09:19:38 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:05.955 09:19:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.955 09:19:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.214 ************************************ 00:05:06.214 START TEST env_dpdk_post_init 00:05:06.214 ************************************ 00:05:06.214 09:19:38 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.214 EAL: Detected CPU lcores: 48 00:05:06.214 EAL: Detected NUMA nodes: 2 00:05:06.214 EAL: Detected shared linkage of DPDK 00:05:06.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.214 EAL: Selected IOVA mode 'VA' 00:05:06.214 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.214 EAL: VFIO support initialized 00:05:06.214 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.214 EAL: Using IOMMU type 1 (Type 1) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:06.214 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:06.474 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:06.474 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:06.474 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:06.474 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:06.474 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:07.041 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:81:00.0 (socket 1) 00:05:11.222 EAL: Releasing PCI mapped resource for 0000:81:00.0 00:05:11.222 EAL: Calling pci_unmap_resource for 0000:81:00.0 at 0x202001040000 00:05:11.222 Starting DPDK initialization... 00:05:11.222 Starting SPDK post initialization... 00:05:11.222 SPDK NVMe probe 00:05:11.222 Attaching to 0000:81:00.0 00:05:11.222 Attached to 0000:81:00.0 00:05:11.222 Cleaning up... 
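
env_dpdk_post_init above probes the ioat and NVMe devices through vfio-pci, which scripts/setup.sh had bound beforehand (the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines earlier). To confirm what a given device is bound to at any point, the driver symlink in sysfs is enough; this is an ad-hoc check, not something the harness runs:

    bdf=0000:81:00.0
    readlink "/sys/bus/pci/devices/$bdf/driver"   # ends in .../drivers/vfio-pci while the test runs
    lspci -k -s "${bdf#0000:}"                    # kernel view: driver in use and candidate modules
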
00:05:11.222 00:05:11.222 real 0m5.212s 00:05:11.222 user 0m3.963s 00:05:11.222 sys 0m0.298s 00:05:11.222 09:19:43 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.222 09:19:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.222 ************************************ 00:05:11.222 END TEST env_dpdk_post_init 00:05:11.222 ************************************ 00:05:11.222 09:19:43 env -- env/env.sh@26 -- # uname 00:05:11.222 09:19:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.222 09:19:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.222 09:19:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.222 09:19:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.222 09:19:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.479 ************************************ 00:05:11.479 START TEST env_mem_callbacks 00:05:11.479 ************************************ 00:05:11.479 09:19:43 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.479 EAL: Detected CPU lcores: 48 00:05:11.479 EAL: Detected NUMA nodes: 2 00:05:11.479 EAL: Detected shared linkage of DPDK 00:05:11.479 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.479 EAL: Selected IOVA mode 'VA' 00:05:11.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.479 EAL: VFIO support initialized 00:05:11.479 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.479 00:05:11.479 00:05:11.479 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.479 http://cunit.sourceforge.net/ 00:05:11.479 00:05:11.479 00:05:11.479 Suite: memory 00:05:11.479 Test: test ... 
00:05:11.479 register 0x200000200000 2097152 00:05:11.479 malloc 3145728 00:05:11.479 register 0x200000400000 4194304 00:05:11.479 buf 0x200000500000 len 3145728 PASSED 00:05:11.479 malloc 64 00:05:11.479 buf 0x2000004fff40 len 64 PASSED 00:05:11.479 malloc 4194304 00:05:11.479 register 0x200000800000 6291456 00:05:11.479 buf 0x200000a00000 len 4194304 PASSED 00:05:11.479 free 0x200000500000 3145728 00:05:11.479 free 0x2000004fff40 64 00:05:11.479 unregister 0x200000400000 4194304 PASSED 00:05:11.479 free 0x200000a00000 4194304 00:05:11.479 unregister 0x200000800000 6291456 PASSED 00:05:11.479 malloc 8388608 00:05:11.479 register 0x200000400000 10485760 00:05:11.479 buf 0x200000600000 len 8388608 PASSED 00:05:11.479 free 0x200000600000 8388608 00:05:11.479 unregister 0x200000400000 10485760 PASSED 00:05:11.479 passed 00:05:11.479 00:05:11.479 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.479 suites 1 1 n/a 0 0 00:05:11.479 tests 1 1 1 0 0 00:05:11.479 asserts 15 15 15 0 n/a 00:05:11.479 00:05:11.479 Elapsed time = 0.005 seconds 00:05:11.479 00:05:11.479 real 0m0.049s 00:05:11.479 user 0m0.017s 00:05:11.479 sys 0m0.032s 00:05:11.479 09:19:44 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.479 09:19:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.479 ************************************ 00:05:11.479 END TEST env_mem_callbacks 00:05:11.479 ************************************ 00:05:11.479 00:05:11.479 real 0m7.196s 00:05:11.479 user 0m5.102s 00:05:11.479 sys 0m1.121s 00:05:11.479 09:19:44 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.479 09:19:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.479 ************************************ 00:05:11.479 END TEST env 00:05:11.479 ************************************ 00:05:11.479 09:19:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.479 09:19:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.479 09:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.479 09:19:44 -- common/autotest_common.sh@10 -- # set +x 00:05:11.479 ************************************ 00:05:11.479 START TEST rpc 00:05:11.479 ************************************ 00:05:11.479 09:19:44 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.479 * Looking for test storage... 00:05:11.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.479 09:19:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=395576 00:05:11.479 09:19:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:11.479 09:19:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.479 09:19:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 395576 00:05:11.479 09:19:44 rpc -- common/autotest_common.sh@829 -- # '[' -z 395576 ']' 00:05:11.479 09:19:44 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.479 09:19:44 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.479 09:19:44 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
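
waitforlisten, traced above, blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A minimal hedged equivalent that polls spdk_get_version over rpc.py instead of using the harness helper:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/bin/spdk_tgt -e bdev &    # same invocation as the rpc.sh trace above
    tgt_pid=$!

    # Poll the default RPC socket until the target responds (give up after ~10 s).
    for _ in $(seq 1 100); do
        if $spdk/scripts/rpc.py spdk_get_version &>/dev/null; then
            echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done
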
00:05:11.480 09:19:44 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.480 09:19:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.480 [2024-07-25 09:19:44.181047] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:11.480 [2024-07-25 09:19:44.181137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395576 ] 00:05:11.480 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.737 [2024-07-25 09:19:44.242836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.737 [2024-07-25 09:19:44.359926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.737 [2024-07-25 09:19:44.359995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 395576' to capture a snapshot of events at runtime. 00:05:11.737 [2024-07-25 09:19:44.360012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.737 [2024-07-25 09:19:44.360025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.737 [2024-07-25 09:19:44.360037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid395576 for offline analysis/debug. 00:05:11.737 [2024-07-25 09:19:44.360074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.994 09:19:44 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.994 09:19:44 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:11.994 09:19:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.994 09:19:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.994 09:19:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:11.994 09:19:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:11.994 09:19:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.994 09:19:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.994 09:19:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.994 ************************************ 00:05:11.994 START TEST rpc_integrity 00:05:11.994 ************************************ 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.994 09:19:44 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.994 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.994 { 00:05:11.994 "name": "Malloc0", 00:05:11.994 "aliases": [ 00:05:11.994 "c0415ce8-8f2b-4960-90bf-ac9c7930db14" 00:05:11.994 ], 00:05:11.994 "product_name": "Malloc disk", 00:05:11.994 "block_size": 512, 00:05:11.994 "num_blocks": 16384, 00:05:11.994 "uuid": "c0415ce8-8f2b-4960-90bf-ac9c7930db14", 00:05:11.994 "assigned_rate_limits": { 00:05:11.994 "rw_ios_per_sec": 0, 00:05:11.994 "rw_mbytes_per_sec": 0, 00:05:11.994 "r_mbytes_per_sec": 0, 00:05:11.994 "w_mbytes_per_sec": 0 00:05:11.994 }, 00:05:11.994 "claimed": false, 00:05:11.994 "zoned": false, 00:05:11.994 "supported_io_types": { 00:05:11.994 "read": true, 00:05:11.994 "write": true, 00:05:11.994 "unmap": true, 00:05:11.994 "flush": true, 00:05:11.994 "reset": true, 00:05:11.994 "nvme_admin": false, 00:05:11.994 "nvme_io": false, 00:05:11.994 "nvme_io_md": false, 00:05:11.994 "write_zeroes": true, 00:05:11.994 "zcopy": true, 00:05:11.994 "get_zone_info": false, 00:05:11.994 "zone_management": false, 00:05:11.994 "zone_append": false, 00:05:11.994 "compare": false, 00:05:11.994 "compare_and_write": false, 00:05:11.994 "abort": true, 00:05:11.994 "seek_hole": false, 00:05:11.994 "seek_data": false, 00:05:11.994 "copy": true, 00:05:11.994 "nvme_iov_md": false 00:05:11.994 }, 00:05:11.994 "memory_domains": [ 00:05:11.994 { 00:05:11.994 "dma_device_id": "system", 00:05:11.994 "dma_device_type": 1 00:05:11.994 }, 00:05:11.994 { 00:05:11.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.994 "dma_device_type": 2 00:05:11.994 } 00:05:11.994 ], 00:05:11.994 "driver_specific": {} 00:05:11.994 } 00:05:11.994 ]' 00:05:11.994 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.252 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 [2024-07-25 09:19:44.762473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.253 [2024-07-25 09:19:44.762519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.253 [2024-07-25 09:19:44.762544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd72d50 00:05:12.253 [2024-07-25 09:19:44.762559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.253 [2024-07-25 09:19:44.764095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
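The rpc_integrity block above exercises the bdev RPCs end to end: create a malloc bdev, layer a passthru bdev on top of it, confirm bdev_get_bdevs reports both, then tear everything down. Against an already running target the same sequence can be reproduced by hand, roughly as below (a sketch using scripts/rpc.py instead of the test's rpc_cmd wrapper; run from the spdk checkout):

    ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB, 512-byte blocks -> "Malloc0" (16384 blocks)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2: Malloc0 + Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 0 again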
00:05:12.253 [2024-07-25 09:19:44.764122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.253 Passthru0 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.253 { 00:05:12.253 "name": "Malloc0", 00:05:12.253 "aliases": [ 00:05:12.253 "c0415ce8-8f2b-4960-90bf-ac9c7930db14" 00:05:12.253 ], 00:05:12.253 "product_name": "Malloc disk", 00:05:12.253 "block_size": 512, 00:05:12.253 "num_blocks": 16384, 00:05:12.253 "uuid": "c0415ce8-8f2b-4960-90bf-ac9c7930db14", 00:05:12.253 "assigned_rate_limits": { 00:05:12.253 "rw_ios_per_sec": 0, 00:05:12.253 "rw_mbytes_per_sec": 0, 00:05:12.253 "r_mbytes_per_sec": 0, 00:05:12.253 "w_mbytes_per_sec": 0 00:05:12.253 }, 00:05:12.253 "claimed": true, 00:05:12.253 "claim_type": "exclusive_write", 00:05:12.253 "zoned": false, 00:05:12.253 "supported_io_types": { 00:05:12.253 "read": true, 00:05:12.253 "write": true, 00:05:12.253 "unmap": true, 00:05:12.253 "flush": true, 00:05:12.253 "reset": true, 00:05:12.253 "nvme_admin": false, 00:05:12.253 "nvme_io": false, 00:05:12.253 "nvme_io_md": false, 00:05:12.253 "write_zeroes": true, 00:05:12.253 "zcopy": true, 00:05:12.253 "get_zone_info": false, 00:05:12.253 "zone_management": false, 00:05:12.253 "zone_append": false, 00:05:12.253 "compare": false, 00:05:12.253 "compare_and_write": false, 00:05:12.253 "abort": true, 00:05:12.253 "seek_hole": false, 00:05:12.253 "seek_data": false, 00:05:12.253 "copy": true, 00:05:12.253 "nvme_iov_md": false 00:05:12.253 }, 00:05:12.253 "memory_domains": [ 00:05:12.253 { 00:05:12.253 "dma_device_id": "system", 00:05:12.253 "dma_device_type": 1 00:05:12.253 }, 00:05:12.253 { 00:05:12.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.253 "dma_device_type": 2 00:05:12.253 } 00:05:12.253 ], 00:05:12.253 "driver_specific": {} 00:05:12.253 }, 00:05:12.253 { 00:05:12.253 "name": "Passthru0", 00:05:12.253 "aliases": [ 00:05:12.253 "c9d70c89-947f-5b16-98bf-9f131fa46f28" 00:05:12.253 ], 00:05:12.253 "product_name": "passthru", 00:05:12.253 "block_size": 512, 00:05:12.253 "num_blocks": 16384, 00:05:12.253 "uuid": "c9d70c89-947f-5b16-98bf-9f131fa46f28", 00:05:12.253 "assigned_rate_limits": { 00:05:12.253 "rw_ios_per_sec": 0, 00:05:12.253 "rw_mbytes_per_sec": 0, 00:05:12.253 "r_mbytes_per_sec": 0, 00:05:12.253 "w_mbytes_per_sec": 0 00:05:12.253 }, 00:05:12.253 "claimed": false, 00:05:12.253 "zoned": false, 00:05:12.253 "supported_io_types": { 00:05:12.253 "read": true, 00:05:12.253 "write": true, 00:05:12.253 "unmap": true, 00:05:12.253 "flush": true, 00:05:12.253 "reset": true, 00:05:12.253 "nvme_admin": false, 00:05:12.253 "nvme_io": false, 00:05:12.253 "nvme_io_md": false, 00:05:12.253 "write_zeroes": true, 00:05:12.253 "zcopy": true, 00:05:12.253 "get_zone_info": false, 00:05:12.253 "zone_management": false, 00:05:12.253 "zone_append": false, 00:05:12.253 "compare": false, 00:05:12.253 "compare_and_write": false, 00:05:12.253 "abort": true, 00:05:12.253 "seek_hole": false, 00:05:12.253 "seek_data": false, 00:05:12.253 "copy": true, 00:05:12.253 "nvme_iov_md": false 00:05:12.253 
}, 00:05:12.253 "memory_domains": [ 00:05:12.253 { 00:05:12.253 "dma_device_id": "system", 00:05:12.253 "dma_device_type": 1 00:05:12.253 }, 00:05:12.253 { 00:05:12.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.253 "dma_device_type": 2 00:05:12.253 } 00:05:12.253 ], 00:05:12.253 "driver_specific": { 00:05:12.253 "passthru": { 00:05:12.253 "name": "Passthru0", 00:05:12.253 "base_bdev_name": "Malloc0" 00:05:12.253 } 00:05:12.253 } 00:05:12.253 } 00:05:12.253 ]' 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.253 09:19:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.253 00:05:12.253 real 0m0.233s 00:05:12.253 user 0m0.150s 00:05:12.253 sys 0m0.027s 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 ************************************ 00:05:12.253 END TEST rpc_integrity 00:05:12.253 ************************************ 00:05:12.253 09:19:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.253 09:19:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.253 09:19:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.253 09:19:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 ************************************ 00:05:12.253 START TEST rpc_plugins 00:05:12.253 ************************************ 00:05:12.253 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:12.253 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.253 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.253 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.253 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:12.253 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.253 09:19:44 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:12.253 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.253 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.253 { 00:05:12.253 "name": "Malloc1", 00:05:12.253 "aliases": [ 00:05:12.253 "6bb0d99b-9232-4bc6-b333-15c3b46aee80" 00:05:12.253 ], 00:05:12.253 "product_name": "Malloc disk", 00:05:12.253 "block_size": 4096, 00:05:12.253 "num_blocks": 256, 00:05:12.253 "uuid": "6bb0d99b-9232-4bc6-b333-15c3b46aee80", 00:05:12.253 "assigned_rate_limits": { 00:05:12.253 "rw_ios_per_sec": 0, 00:05:12.253 "rw_mbytes_per_sec": 0, 00:05:12.253 "r_mbytes_per_sec": 0, 00:05:12.253 "w_mbytes_per_sec": 0 00:05:12.253 }, 00:05:12.253 "claimed": false, 00:05:12.253 "zoned": false, 00:05:12.253 "supported_io_types": { 00:05:12.253 "read": true, 00:05:12.253 "write": true, 00:05:12.253 "unmap": true, 00:05:12.253 "flush": true, 00:05:12.253 "reset": true, 00:05:12.253 "nvme_admin": false, 00:05:12.253 "nvme_io": false, 00:05:12.253 "nvme_io_md": false, 00:05:12.253 "write_zeroes": true, 00:05:12.253 "zcopy": true, 00:05:12.253 "get_zone_info": false, 00:05:12.253 "zone_management": false, 00:05:12.253 "zone_append": false, 00:05:12.253 "compare": false, 00:05:12.253 "compare_and_write": false, 00:05:12.253 "abort": true, 00:05:12.253 "seek_hole": false, 00:05:12.253 "seek_data": false, 00:05:12.253 "copy": true, 00:05:12.253 "nvme_iov_md": false 00:05:12.253 }, 00:05:12.253 "memory_domains": [ 00:05:12.253 { 00:05:12.253 "dma_device_id": "system", 00:05:12.253 "dma_device_type": 1 00:05:12.253 }, 00:05:12.253 { 00:05:12.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.253 "dma_device_type": 2 00:05:12.253 } 00:05:12.253 ], 00:05:12.253 "driver_specific": {} 00:05:12.253 } 00:05:12.253 ]' 00:05:12.253 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.511 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.511 09:19:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.511 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.511 09:19:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.511 09:19:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.511 09:19:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.511 09:19:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.511 09:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.511 09:19:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.511 09:19:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.511 09:19:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.511 09:19:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.511 00:05:12.511 real 0m0.117s 00:05:12.511 user 0m0.079s 00:05:12.511 sys 0m0.007s 00:05:12.511 09:19:45 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.511 09:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.511 ************************************ 00:05:12.511 END TEST rpc_plugins 00:05:12.511 ************************************ 00:05:12.511 09:19:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:12.511 09:19:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.511 09:19:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.511 09:19:45 
rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.511 ************************************ 00:05:12.511 START TEST rpc_trace_cmd_test 00:05:12.511 ************************************ 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:12.511 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid395576", 00:05:12.511 "tpoint_group_mask": "0x8", 00:05:12.511 "iscsi_conn": { 00:05:12.511 "mask": "0x2", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "scsi": { 00:05:12.511 "mask": "0x4", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "bdev": { 00:05:12.511 "mask": "0x8", 00:05:12.511 "tpoint_mask": "0xffffffffffffffff" 00:05:12.511 }, 00:05:12.511 "nvmf_rdma": { 00:05:12.511 "mask": "0x10", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "nvmf_tcp": { 00:05:12.511 "mask": "0x20", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "ftl": { 00:05:12.511 "mask": "0x40", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "blobfs": { 00:05:12.511 "mask": "0x80", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "dsa": { 00:05:12.511 "mask": "0x200", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "thread": { 00:05:12.511 "mask": "0x400", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "nvme_pcie": { 00:05:12.511 "mask": "0x800", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "iaa": { 00:05:12.511 "mask": "0x1000", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "nvme_tcp": { 00:05:12.511 "mask": "0x2000", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "bdev_nvme": { 00:05:12.511 "mask": "0x4000", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 }, 00:05:12.511 "sock": { 00:05:12.511 "mask": "0x8000", 00:05:12.511 "tpoint_mask": "0x0" 00:05:12.511 } 00:05:12.511 }' 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:12.511 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:12.769 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:12.769 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:12.769 09:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:12.769 00:05:12.769 real 0m0.195s 00:05:12.769 user 0m0.177s 00:05:12.769 sys 0m0.012s 00:05:12.769 09:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.769 09:19:45 rpc.rpc_trace_cmd_test 
-- common/autotest_common.sh@10 -- # set +x 00:05:12.769 ************************************ 00:05:12.769 END TEST rpc_trace_cmd_test 00:05:12.769 ************************************ 00:05:12.769 09:19:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:12.769 09:19:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:12.769 09:19:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:12.769 09:19:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.769 09:19:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.769 09:19:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.769 ************************************ 00:05:12.769 START TEST rpc_daemon_integrity 00:05:12.769 ************************************ 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.769 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.769 { 00:05:12.769 "name": "Malloc2", 00:05:12.769 "aliases": [ 00:05:12.769 "cd1583bd-a998-4f79-a370-2829bf8ecd4c" 00:05:12.769 ], 00:05:12.769 "product_name": "Malloc disk", 00:05:12.769 "block_size": 512, 00:05:12.769 "num_blocks": 16384, 00:05:12.769 "uuid": "cd1583bd-a998-4f79-a370-2829bf8ecd4c", 00:05:12.769 "assigned_rate_limits": { 00:05:12.769 "rw_ios_per_sec": 0, 00:05:12.769 "rw_mbytes_per_sec": 0, 00:05:12.769 "r_mbytes_per_sec": 0, 00:05:12.769 "w_mbytes_per_sec": 0 00:05:12.769 }, 00:05:12.769 "claimed": false, 00:05:12.769 "zoned": false, 00:05:12.769 "supported_io_types": { 00:05:12.769 "read": true, 00:05:12.769 "write": true, 00:05:12.769 "unmap": true, 00:05:12.769 "flush": true, 00:05:12.769 "reset": true, 00:05:12.769 "nvme_admin": false, 00:05:12.769 "nvme_io": false, 00:05:12.769 "nvme_io_md": false, 00:05:12.769 "write_zeroes": true, 00:05:12.769 "zcopy": true, 00:05:12.769 "get_zone_info": false, 00:05:12.769 "zone_management": false, 00:05:12.769 "zone_append": false, 00:05:12.769 "compare": false, 00:05:12.769 "compare_and_write": false, 00:05:12.769 "abort": true, 
00:05:12.770 "seek_hole": false, 00:05:12.770 "seek_data": false, 00:05:12.770 "copy": true, 00:05:12.770 "nvme_iov_md": false 00:05:12.770 }, 00:05:12.770 "memory_domains": [ 00:05:12.770 { 00:05:12.770 "dma_device_id": "system", 00:05:12.770 "dma_device_type": 1 00:05:12.770 }, 00:05:12.770 { 00:05:12.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.770 "dma_device_type": 2 00:05:12.770 } 00:05:12.770 ], 00:05:12.770 "driver_specific": {} 00:05:12.770 } 00:05:12.770 ]' 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.770 [2024-07-25 09:19:45.445015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:12.770 [2024-07-25 09:19:45.445058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.770 [2024-07-25 09:19:45.445086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd72980 00:05:12.770 [2024-07-25 09:19:45.445102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.770 [2024-07-25 09:19:45.446460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.770 [2024-07-25 09:19:45.446484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.770 Passthru0 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.770 { 00:05:12.770 "name": "Malloc2", 00:05:12.770 "aliases": [ 00:05:12.770 "cd1583bd-a998-4f79-a370-2829bf8ecd4c" 00:05:12.770 ], 00:05:12.770 "product_name": "Malloc disk", 00:05:12.770 "block_size": 512, 00:05:12.770 "num_blocks": 16384, 00:05:12.770 "uuid": "cd1583bd-a998-4f79-a370-2829bf8ecd4c", 00:05:12.770 "assigned_rate_limits": { 00:05:12.770 "rw_ios_per_sec": 0, 00:05:12.770 "rw_mbytes_per_sec": 0, 00:05:12.770 "r_mbytes_per_sec": 0, 00:05:12.770 "w_mbytes_per_sec": 0 00:05:12.770 }, 00:05:12.770 "claimed": true, 00:05:12.770 "claim_type": "exclusive_write", 00:05:12.770 "zoned": false, 00:05:12.770 "supported_io_types": { 00:05:12.770 "read": true, 00:05:12.770 "write": true, 00:05:12.770 "unmap": true, 00:05:12.770 "flush": true, 00:05:12.770 "reset": true, 00:05:12.770 "nvme_admin": false, 00:05:12.770 "nvme_io": false, 00:05:12.770 "nvme_io_md": false, 00:05:12.770 "write_zeroes": true, 00:05:12.770 "zcopy": true, 00:05:12.770 "get_zone_info": false, 00:05:12.770 "zone_management": false, 00:05:12.770 "zone_append": false, 00:05:12.770 "compare": false, 00:05:12.770 "compare_and_write": false, 00:05:12.770 "abort": true, 00:05:12.770 "seek_hole": false, 00:05:12.770 "seek_data": false, 00:05:12.770 "copy": true, 00:05:12.770 "nvme_iov_md": false 
00:05:12.770 }, 00:05:12.770 "memory_domains": [ 00:05:12.770 { 00:05:12.770 "dma_device_id": "system", 00:05:12.770 "dma_device_type": 1 00:05:12.770 }, 00:05:12.770 { 00:05:12.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.770 "dma_device_type": 2 00:05:12.770 } 00:05:12.770 ], 00:05:12.770 "driver_specific": {} 00:05:12.770 }, 00:05:12.770 { 00:05:12.770 "name": "Passthru0", 00:05:12.770 "aliases": [ 00:05:12.770 "4f1a6716-a8f7-5912-b6d3-7693559a1681" 00:05:12.770 ], 00:05:12.770 "product_name": "passthru", 00:05:12.770 "block_size": 512, 00:05:12.770 "num_blocks": 16384, 00:05:12.770 "uuid": "4f1a6716-a8f7-5912-b6d3-7693559a1681", 00:05:12.770 "assigned_rate_limits": { 00:05:12.770 "rw_ios_per_sec": 0, 00:05:12.770 "rw_mbytes_per_sec": 0, 00:05:12.770 "r_mbytes_per_sec": 0, 00:05:12.770 "w_mbytes_per_sec": 0 00:05:12.770 }, 00:05:12.770 "claimed": false, 00:05:12.770 "zoned": false, 00:05:12.770 "supported_io_types": { 00:05:12.770 "read": true, 00:05:12.770 "write": true, 00:05:12.770 "unmap": true, 00:05:12.770 "flush": true, 00:05:12.770 "reset": true, 00:05:12.770 "nvme_admin": false, 00:05:12.770 "nvme_io": false, 00:05:12.770 "nvme_io_md": false, 00:05:12.770 "write_zeroes": true, 00:05:12.770 "zcopy": true, 00:05:12.770 "get_zone_info": false, 00:05:12.770 "zone_management": false, 00:05:12.770 "zone_append": false, 00:05:12.770 "compare": false, 00:05:12.770 "compare_and_write": false, 00:05:12.770 "abort": true, 00:05:12.770 "seek_hole": false, 00:05:12.770 "seek_data": false, 00:05:12.770 "copy": true, 00:05:12.770 "nvme_iov_md": false 00:05:12.770 }, 00:05:12.770 "memory_domains": [ 00:05:12.770 { 00:05:12.770 "dma_device_id": "system", 00:05:12.770 "dma_device_type": 1 00:05:12.770 }, 00:05:12.770 { 00:05:12.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.770 "dma_device_type": 2 00:05:12.770 } 00:05:12.770 ], 00:05:12.770 "driver_specific": { 00:05:12.770 "passthru": { 00:05:12.770 "name": "Passthru0", 00:05:12.770 "base_bdev_name": "Malloc2" 00:05:12.770 } 00:05:12.770 } 00:05:12.770 } 00:05:12.770 ]' 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.770 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.028 00:05:13.028 real 0m0.228s 00:05:13.028 user 0m0.155s 00:05:13.028 sys 0m0.019s 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.028 09:19:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.028 ************************************ 00:05:13.028 END TEST rpc_daemon_integrity 00:05:13.028 ************************************ 00:05:13.028 09:19:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.028 09:19:45 rpc -- rpc/rpc.sh@84 -- # killprocess 395576 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@948 -- # '[' -z 395576 ']' 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@952 -- # kill -0 395576 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@953 -- # uname 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 395576 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 395576' 00:05:13.028 killing process with pid 395576 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@967 -- # kill 395576 00:05:13.028 09:19:45 rpc -- common/autotest_common.sh@972 -- # wait 395576 00:05:13.593 00:05:13.593 real 0m1.976s 00:05:13.593 user 0m2.506s 00:05:13.593 sys 0m0.578s 00:05:13.593 09:19:46 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.593 09:19:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.593 ************************************ 00:05:13.593 END TEST rpc 00:05:13.593 ************************************ 00:05:13.593 09:19:46 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.593 09:19:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.593 09:19:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.593 09:19:46 -- common/autotest_common.sh@10 -- # set +x 00:05:13.593 ************************************ 00:05:13.593 START TEST skip_rpc 00:05:13.593 ************************************ 00:05:13.593 09:19:46 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.593 * Looking for test storage... 
00:05:13.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.593 09:19:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.593 09:19:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.593 09:19:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:13.593 09:19:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.593 09:19:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.593 09:19:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.593 ************************************ 00:05:13.593 START TEST skip_rpc 00:05:13.593 ************************************ 00:05:13.593 09:19:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:13.593 09:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=396011 00:05:13.593 09:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:13.593 09:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.593 09:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.593 [2024-07-25 09:19:46.233840] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:13.593 [2024-07-25 09:19:46.233914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396011 ] 00:05:13.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.593 [2024-07-25 09:19:46.292331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.851 [2024-07-25 09:19:46.409730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 396011 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 396011 ']' 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 396011 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 396011 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 396011' 00:05:19.112 killing process with pid 396011 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 396011 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 396011 00:05:19.112 00:05:19.112 real 0m5.482s 00:05:19.112 user 0m5.158s 00:05:19.112 sys 0m0.333s 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.112 09:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.112 ************************************ 00:05:19.112 END TEST skip_rpc 00:05:19.112 ************************************ 00:05:19.112 09:19:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.112 09:19:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.112 09:19:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.112 09:19:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.112 ************************************ 00:05:19.112 START TEST skip_rpc_with_json 00:05:19.112 ************************************ 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=396698 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 396698 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 396698 ']' 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
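The skip_rpc case that just finished is the negative counterpart of the earlier rpc tests: started with --no-rpc-server the target must come up without a JSON-RPC listener, so any client call has to fail. Stripped of the harness's NOT/exit-status bookkeeping, that assertion amounts to roughly the following sketch (same flags as in the log; the plain sleep mirrors the test, which cannot use waitforlisten here because there is no socket to wait on):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC server answered although --no-rpc-server was given"
    fi
    kill "$spdk_pid"; wait "$spdk_pid"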
00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.112 09:19:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.112 [2024-07-25 09:19:51.769925] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:19.112 [2024-07-25 09:19:51.770021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396698 ] 00:05:19.112 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.112 [2024-07-25 09:19:51.831549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.371 [2024-07-25 09:19:51.945822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.305 [2024-07-25 09:19:52.707970] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.305 request: 00:05:20.305 { 00:05:20.305 "trtype": "tcp", 00:05:20.305 "method": "nvmf_get_transports", 00:05:20.305 "req_id": 1 00:05:20.305 } 00:05:20.305 Got JSON-RPC error response 00:05:20.305 response: 00:05:20.305 { 00:05:20.305 "code": -19, 00:05:20.305 "message": "No such device" 00:05:20.305 } 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.305 [2024-07-25 09:19:52.716091] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.305 09:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.305 { 00:05:20.305 "subsystems": [ 00:05:20.305 { 00:05:20.305 "subsystem": "vfio_user_target", 00:05:20.305 "config": null 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "subsystem": "keyring", 00:05:20.305 "config": [] 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "subsystem": "iobuf", 00:05:20.305 "config": [ 00:05:20.305 { 00:05:20.305 "method": "iobuf_set_options", 00:05:20.305 "params": { 00:05:20.305 "small_pool_count": 8192, 00:05:20.305 "large_pool_count": 1024, 00:05:20.305 "small_bufsize": 8192, 00:05:20.305 "large_bufsize": 
135168 00:05:20.305 } 00:05:20.305 } 00:05:20.305 ] 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "subsystem": "sock", 00:05:20.305 "config": [ 00:05:20.305 { 00:05:20.305 "method": "sock_set_default_impl", 00:05:20.305 "params": { 00:05:20.305 "impl_name": "posix" 00:05:20.305 } 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "method": "sock_impl_set_options", 00:05:20.305 "params": { 00:05:20.305 "impl_name": "ssl", 00:05:20.305 "recv_buf_size": 4096, 00:05:20.305 "send_buf_size": 4096, 00:05:20.305 "enable_recv_pipe": true, 00:05:20.305 "enable_quickack": false, 00:05:20.305 "enable_placement_id": 0, 00:05:20.305 "enable_zerocopy_send_server": true, 00:05:20.305 "enable_zerocopy_send_client": false, 00:05:20.305 "zerocopy_threshold": 0, 00:05:20.305 "tls_version": 0, 00:05:20.305 "enable_ktls": false 00:05:20.305 } 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "method": "sock_impl_set_options", 00:05:20.305 "params": { 00:05:20.305 "impl_name": "posix", 00:05:20.305 "recv_buf_size": 2097152, 00:05:20.305 "send_buf_size": 2097152, 00:05:20.305 "enable_recv_pipe": true, 00:05:20.305 "enable_quickack": false, 00:05:20.305 "enable_placement_id": 0, 00:05:20.305 "enable_zerocopy_send_server": true, 00:05:20.305 "enable_zerocopy_send_client": false, 00:05:20.305 "zerocopy_threshold": 0, 00:05:20.305 "tls_version": 0, 00:05:20.305 "enable_ktls": false 00:05:20.305 } 00:05:20.305 } 00:05:20.305 ] 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "subsystem": "vmd", 00:05:20.305 "config": [] 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "subsystem": "accel", 00:05:20.305 "config": [ 00:05:20.305 { 00:05:20.305 "method": "accel_set_options", 00:05:20.305 "params": { 00:05:20.305 "small_cache_size": 128, 00:05:20.305 "large_cache_size": 16, 00:05:20.305 "task_count": 2048, 00:05:20.305 "sequence_count": 2048, 00:05:20.305 "buf_count": 2048 00:05:20.305 } 00:05:20.305 } 00:05:20.305 ] 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "subsystem": "bdev", 00:05:20.305 "config": [ 00:05:20.305 { 00:05:20.305 "method": "bdev_set_options", 00:05:20.305 "params": { 00:05:20.305 "bdev_io_pool_size": 65535, 00:05:20.305 "bdev_io_cache_size": 256, 00:05:20.305 "bdev_auto_examine": true, 00:05:20.305 "iobuf_small_cache_size": 128, 00:05:20.305 "iobuf_large_cache_size": 16 00:05:20.305 } 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "method": "bdev_raid_set_options", 00:05:20.305 "params": { 00:05:20.305 "process_window_size_kb": 1024, 00:05:20.305 "process_max_bandwidth_mb_sec": 0 00:05:20.305 } 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "method": "bdev_iscsi_set_options", 00:05:20.305 "params": { 00:05:20.305 "timeout_sec": 30 00:05:20.305 } 00:05:20.305 }, 00:05:20.305 { 00:05:20.305 "method": "bdev_nvme_set_options", 00:05:20.305 "params": { 00:05:20.305 "action_on_timeout": "none", 00:05:20.305 "timeout_us": 0, 00:05:20.305 "timeout_admin_us": 0, 00:05:20.305 "keep_alive_timeout_ms": 10000, 00:05:20.305 "arbitration_burst": 0, 00:05:20.305 "low_priority_weight": 0, 00:05:20.305 "medium_priority_weight": 0, 00:05:20.305 "high_priority_weight": 0, 00:05:20.305 "nvme_adminq_poll_period_us": 10000, 00:05:20.305 "nvme_ioq_poll_period_us": 0, 00:05:20.305 "io_queue_requests": 0, 00:05:20.305 "delay_cmd_submit": true, 00:05:20.305 "transport_retry_count": 4, 00:05:20.305 "bdev_retry_count": 3, 00:05:20.305 "transport_ack_timeout": 0, 00:05:20.305 "ctrlr_loss_timeout_sec": 0, 00:05:20.305 "reconnect_delay_sec": 0, 00:05:20.305 "fast_io_fail_timeout_sec": 0, 00:05:20.305 "disable_auto_failback": false, 00:05:20.305 "generate_uuids": 
false, 00:05:20.305 "transport_tos": 0, 00:05:20.305 "nvme_error_stat": false, 00:05:20.305 "rdma_srq_size": 0, 00:05:20.305 "io_path_stat": false, 00:05:20.305 "allow_accel_sequence": false, 00:05:20.305 "rdma_max_cq_size": 0, 00:05:20.305 "rdma_cm_event_timeout_ms": 0, 00:05:20.305 "dhchap_digests": [ 00:05:20.305 "sha256", 00:05:20.305 "sha384", 00:05:20.305 "sha512" 00:05:20.305 ], 00:05:20.305 "dhchap_dhgroups": [ 00:05:20.305 "null", 00:05:20.305 "ffdhe2048", 00:05:20.305 "ffdhe3072", 00:05:20.305 "ffdhe4096", 00:05:20.306 "ffdhe6144", 00:05:20.306 "ffdhe8192" 00:05:20.306 ] 00:05:20.306 } 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "method": "bdev_nvme_set_hotplug", 00:05:20.306 "params": { 00:05:20.306 "period_us": 100000, 00:05:20.306 "enable": false 00:05:20.306 } 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "method": "bdev_wait_for_examine" 00:05:20.306 } 00:05:20.306 ] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "scsi", 00:05:20.306 "config": null 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "scheduler", 00:05:20.306 "config": [ 00:05:20.306 { 00:05:20.306 "method": "framework_set_scheduler", 00:05:20.306 "params": { 00:05:20.306 "name": "static" 00:05:20.306 } 00:05:20.306 } 00:05:20.306 ] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "vhost_scsi", 00:05:20.306 "config": [] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "vhost_blk", 00:05:20.306 "config": [] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "ublk", 00:05:20.306 "config": [] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "nbd", 00:05:20.306 "config": [] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "nvmf", 00:05:20.306 "config": [ 00:05:20.306 { 00:05:20.306 "method": "nvmf_set_config", 00:05:20.306 "params": { 00:05:20.306 "discovery_filter": "match_any", 00:05:20.306 "admin_cmd_passthru": { 00:05:20.306 "identify_ctrlr": false 00:05:20.306 } 00:05:20.306 } 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "method": "nvmf_set_max_subsystems", 00:05:20.306 "params": { 00:05:20.306 "max_subsystems": 1024 00:05:20.306 } 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "method": "nvmf_set_crdt", 00:05:20.306 "params": { 00:05:20.306 "crdt1": 0, 00:05:20.306 "crdt2": 0, 00:05:20.306 "crdt3": 0 00:05:20.306 } 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "method": "nvmf_create_transport", 00:05:20.306 "params": { 00:05:20.306 "trtype": "TCP", 00:05:20.306 "max_queue_depth": 128, 00:05:20.306 "max_io_qpairs_per_ctrlr": 127, 00:05:20.306 "in_capsule_data_size": 4096, 00:05:20.306 "max_io_size": 131072, 00:05:20.306 "io_unit_size": 131072, 00:05:20.306 "max_aq_depth": 128, 00:05:20.306 "num_shared_buffers": 511, 00:05:20.306 "buf_cache_size": 4294967295, 00:05:20.306 "dif_insert_or_strip": false, 00:05:20.306 "zcopy": false, 00:05:20.306 "c2h_success": true, 00:05:20.306 "sock_priority": 0, 00:05:20.306 "abort_timeout_sec": 1, 00:05:20.306 "ack_timeout": 0, 00:05:20.306 "data_wr_pool_size": 0 00:05:20.306 } 00:05:20.306 } 00:05:20.306 ] 00:05:20.306 }, 00:05:20.306 { 00:05:20.306 "subsystem": "iscsi", 00:05:20.306 "config": [ 00:05:20.306 { 00:05:20.306 "method": "iscsi_set_options", 00:05:20.306 "params": { 00:05:20.306 "node_base": "iqn.2016-06.io.spdk", 00:05:20.306 "max_sessions": 128, 00:05:20.306 "max_connections_per_session": 2, 00:05:20.306 "max_queue_depth": 64, 00:05:20.306 "default_time2wait": 2, 00:05:20.306 "default_time2retain": 20, 00:05:20.306 "first_burst_length": 8192, 00:05:20.306 "immediate_data": true, 00:05:20.306 "allow_duplicated_isid": 
false, 00:05:20.306 "error_recovery_level": 0, 00:05:20.306 "nop_timeout": 60, 00:05:20.306 "nop_in_interval": 30, 00:05:20.306 "disable_chap": false, 00:05:20.306 "require_chap": false, 00:05:20.306 "mutual_chap": false, 00:05:20.306 "chap_group": 0, 00:05:20.306 "max_large_datain_per_connection": 64, 00:05:20.306 "max_r2t_per_connection": 4, 00:05:20.306 "pdu_pool_size": 36864, 00:05:20.306 "immediate_data_pool_size": 16384, 00:05:20.306 "data_out_pool_size": 2048 00:05:20.306 } 00:05:20.306 } 00:05:20.306 ] 00:05:20.306 } 00:05:20.306 ] 00:05:20.306 } 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 396698 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 396698 ']' 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 396698 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 396698 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 396698' 00:05:20.306 killing process with pid 396698 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 396698 00:05:20.306 09:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 396698 00:05:20.872 09:19:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=396851 00:05:20.872 09:19:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.872 09:19:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 396851 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 396851 ']' 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 396851 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 396851 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 396851' 00:05:26.133 killing process with pid 396851 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 396851 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 396851 
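What skip_rpc_with_json checks above is the save/replay round trip: the TCP transport is created over RPC, the full runtime configuration is dumped with save_config, and a second target started from that JSON with the RPC server disabled must recreate the transport by itself, which the grep for 'TCP Transport Init' just below confirms. Condensed into a sketch (config.json and log.txt stand in for the test's fixed paths):

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    # replay the saved config in a fresh target without an RPC server
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo "transport restored from JSON"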
00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.133 00:05:26.133 real 0m7.130s 00:05:26.133 user 0m6.915s 00:05:26.133 sys 0m0.725s 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.133 09:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.133 ************************************ 00:05:26.133 END TEST skip_rpc_with_json 00:05:26.133 ************************************ 00:05:26.392 09:19:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.392 09:19:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.392 09:19:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.392 09:19:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.392 ************************************ 00:05:26.392 START TEST skip_rpc_with_delay 00:05:26.392 ************************************ 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.392 [2024-07-25 09:19:58.948316] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
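The error just above is the whole point of skip_rpc_with_delay: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to be rejected during startup. As a standalone check this is a one-liner that is expected to fail (the harness asserts the non-zero exit status):

    # must exit non-zero and print the "Cannot use '--wait-for-rpc'" error
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc && echo "FAIL: flags were accepted"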
00:05:26.392 [2024-07-25 09:19:58.948457] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.392 00:05:26.392 real 0m0.069s 00:05:26.392 user 0m0.043s 00:05:26.392 sys 0m0.026s 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.392 09:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.392 ************************************ 00:05:26.392 END TEST skip_rpc_with_delay 00:05:26.392 ************************************ 00:05:26.392 09:19:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.392 09:19:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.392 09:19:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.392 09:19:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.392 09:19:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.392 09:19:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.392 ************************************ 00:05:26.392 START TEST exit_on_failed_rpc_init 00:05:26.392 ************************************ 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=397567 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 397567 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 397567 ']' 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.392 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.392 [2024-07-25 09:19:59.064879] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:05:26.392 [2024-07-25 09:19:59.064962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397567 ] 00:05:26.392 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.392 [2024-07-25 09:19:59.122290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.651 [2024-07-25 09:19:59.237482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.584 09:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.584 [2024-07-25 09:20:00.047826] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:05:27.584 [2024-07-25 09:20:00.047921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397697 ] 00:05:27.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.584 [2024-07-25 09:20:00.112398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.585 [2024-07-25 09:20:00.230722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.585 [2024-07-25 09:20:00.230825] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:27.585 [2024-07-25 09:20:00.230846] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.585 [2024-07-25 09:20:00.230860] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.842 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 397567 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 397567 ']' 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 397567 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397567 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397567' 00:05:27.843 killing process with pid 397567 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 397567 00:05:27.843 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 397567 00:05:28.410 00:05:28.410 real 0m1.826s 00:05:28.410 user 0m2.185s 00:05:28.410 sys 0m0.491s 00:05:28.410 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.410 09:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 ************************************ 00:05:28.410 END TEST exit_on_failed_rpc_init 00:05:28.410 ************************************ 00:05:28.410 09:20:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 
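What exit_on_failed_rpc_init exercises above is that a second spdk_tgt cannot initialize its RPC server while the first instance still owns /var/tmp/spdk.sock, and that the failed init makes the second app exit non-zero. Reduced to its essentials, the scenario looks like the sketch below; the fixed sleep is a stand-in for the real test's waitforlisten, and the SIGINT cleanup mirrors its killprocess step:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &            # first instance binds /var/tmp/spdk.sock
    first_pid=$!
    sleep 2                         # stand-in for waitforlisten
    if "$spdk_tgt" -m 0x2; then     # must fail: RPC Unix socket already in use
        echo "unexpected: second spdk_tgt initialized its RPC server" >&2
    fi
    kill -SIGINT "$first_pid"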
00:05:28.410 00:05:28.410 real 0m14.756s 00:05:28.410 user 0m14.402s 00:05:28.410 sys 0m1.737s 00:05:28.410 09:20:00 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.410 09:20:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 ************************************ 00:05:28.410 END TEST skip_rpc 00:05:28.410 ************************************ 00:05:28.410 09:20:00 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.410 09:20:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.410 09:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.410 09:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 ************************************ 00:05:28.410 START TEST rpc_client 00:05:28.410 ************************************ 00:05:28.410 09:20:00 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.410 * Looking for test storage... 00:05:28.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:28.410 09:20:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:28.410 OK 00:05:28.410 09:20:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.410 00:05:28.410 real 0m0.071s 00:05:28.410 user 0m0.025s 00:05:28.410 sys 0m0.051s 00:05:28.410 09:20:00 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.410 09:20:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 ************************************ 00:05:28.410 END TEST rpc_client 00:05:28.410 ************************************ 00:05:28.410 09:20:01 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.410 09:20:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.410 09:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.410 09:20:01 -- common/autotest_common.sh@10 -- # set +x 00:05:28.410 ************************************ 00:05:28.410 START TEST json_config 00:05:28.410 ************************************ 00:05:28.410 09:20:01 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.410 09:20:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:05:28.410 09:20:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.410 09:20:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.410 09:20:01 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.410 09:20:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.410 09:20:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.410 09:20:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.410 09:20:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.410 09:20:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.410 09:20:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@47 -- # : 0 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.410 09:20:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.410 09:20:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.410 09:20:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:28.411 INFO: JSON configuration test init 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.411 09:20:01 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.411 09:20:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.411 09:20:01 json_config -- json_config/common.sh@10 -- # shift 00:05:28.411 09:20:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.411 09:20:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.411 09:20:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.411 09:20:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.411 09:20:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.411 
09:20:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=397947 00:05:28.411 09:20:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.411 09:20:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.411 Waiting for target to run... 00:05:28.411 09:20:01 json_config -- json_config/common.sh@25 -- # waitforlisten 397947 /var/tmp/spdk_tgt.sock 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@829 -- # '[' -z 397947 ']' 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.411 09:20:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.411 [2024-07-25 09:20:01.132241] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:28.411 [2024-07-25 09:20:01.132337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397947 ] 00:05:28.669 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.926 [2024-07-25 09:20:01.491167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.926 [2024-07-25 09:20:01.580010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.491 09:20:02 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.491 09:20:02 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:29.491 09:20:02 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.491 00:05:29.491 09:20:02 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:29.491 09:20:02 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:29.491 09:20:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.491 09:20:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.491 09:20:02 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:29.491 09:20:02 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:29.491 09:20:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.491 09:20:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.491 09:20:02 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.491 09:20:02 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:29.491 09:20:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@43 -- # timing_enter 
tgt_check_notification_types 00:05:32.770 09:20:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.770 09:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:32.770 09:20:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:32.770 09:20:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@51 -- # sort 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:33.029 09:20:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.029 09:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:33.029 09:20:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.029 09:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:33.029 09:20:05 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.029 09:20:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.287 MallocForNvmf0 00:05:33.287 09:20:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name 
MallocForNvmf1 00:05:33.287 09:20:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.544 MallocForNvmf1 00:05:33.544 09:20:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.544 09:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.802 [2024-07-25 09:20:06.298498] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.802 09:20:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.802 09:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.059 09:20:06 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.060 09:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.318 09:20:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.318 09:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.576 09:20:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.576 09:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.576 [2024-07-25 09:20:07.285728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.576 09:20:07 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:34.576 09:20:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.576 09:20:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.833 09:20:07 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:34.833 09:20:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.833 09:20:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.833 09:20:07 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:34.833 09:20:07 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.833 09:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.091 MallocBdevForConfigChangeCheck 00:05:35.091 09:20:07 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:35.091 09:20:07 json_config -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.091 09:20:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.091 09:20:07 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:35.091 09:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.348 09:20:07 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:35.348 INFO: shutting down applications... 00:05:35.348 09:20:07 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:35.348 09:20:07 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:35.348 09:20:07 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:35.348 09:20:07 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:37.876 Calling clear_iscsi_subsystem 00:05:37.876 Calling clear_nvmf_subsystem 00:05:37.876 Calling clear_nbd_subsystem 00:05:37.876 Calling clear_ublk_subsystem 00:05:37.876 Calling clear_vhost_blk_subsystem 00:05:37.876 Calling clear_vhost_scsi_subsystem 00:05:37.876 Calling clear_bdev_subsystem 00:05:37.876 09:20:10 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:37.876 09:20:10 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:37.876 09:20:10 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:37.876 09:20:10 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.876 09:20:10 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:37.876 09:20:10 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:38.134 09:20:10 json_config -- json_config/json_config.sh@349 -- # break 00:05:38.134 09:20:10 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:38.134 09:20:10 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:38.134 09:20:10 json_config -- json_config/common.sh@31 -- # local app=target 00:05:38.134 09:20:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.134 09:20:10 json_config -- json_config/common.sh@35 -- # [[ -n 397947 ]] 00:05:38.134 09:20:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 397947 00:05:38.134 09:20:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.134 09:20:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.134 09:20:10 json_config -- json_config/common.sh@41 -- # kill -0 397947 00:05:38.134 09:20:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.700 09:20:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.700 09:20:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.700 09:20:11 json_config -- json_config/common.sh@41 -- # kill -0 397947 00:05:38.700 09:20:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.700 09:20:11 json_config -- json_config/common.sh@43 -- # break 
00:05:38.700 09:20:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.700 09:20:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.700 SPDK target shutdown done 00:05:38.700 09:20:11 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:38.700 INFO: relaunching applications... 00:05:38.700 09:20:11 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.700 09:20:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:38.700 09:20:11 json_config -- json_config/common.sh@10 -- # shift 00:05:38.700 09:20:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.700 09:20:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.700 09:20:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.700 09:20:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.700 09:20:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.700 09:20:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=399274 00:05:38.700 09:20:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.700 09:20:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.700 Waiting for target to run... 00:05:38.700 09:20:11 json_config -- json_config/common.sh@25 -- # waitforlisten 399274 /var/tmp/spdk_tgt.sock 00:05:38.700 09:20:11 json_config -- common/autotest_common.sh@829 -- # '[' -z 399274 ']' 00:05:38.700 09:20:11 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.700 09:20:11 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.700 09:20:11 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.700 09:20:11 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.700 09:20:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 [2024-07-25 09:20:11.331816] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
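The spdk_tgt_config.json used for the relaunch above is simply what save_config emitted after the first target was configured. Replayed by hand against that target, the configuration phase recorded earlier in this trace boils down to the following RPC sequence (commands exactly as issued by tgt_rpc; the -s socket path matches the test's spdk_tgt.sock):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck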
00:05:38.700 [2024-07-25 09:20:11.331908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399274 ] 00:05:38.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.958 [2024-07-25 09:20:11.686636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.217 [2024-07-25 09:20:11.775980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.498 [2024-07-25 09:20:14.815812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.498 [2024-07-25 09:20:14.848259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:42.498 09:20:14 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.498 09:20:14 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:42.498 09:20:14 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.498 00:05:42.498 09:20:14 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:42.498 09:20:14 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:42.498 INFO: Checking if target configuration is the same... 00:05:42.498 09:20:14 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.498 09:20:14 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:42.498 09:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.498 + '[' 2 -ne 2 ']' 00:05:42.498 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.498 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.498 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.498 +++ basename /dev/fd/62 00:05:42.498 ++ mktemp /tmp/62.XXX 00:05:42.498 + tmp_file_1=/tmp/62.PKL 00:05:42.498 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.498 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.498 + tmp_file_2=/tmp/spdk_tgt_config.json.Xoc 00:05:42.498 + ret=0 00:05:42.498 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.755 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.755 + diff -u /tmp/62.PKL /tmp/spdk_tgt_config.json.Xoc 00:05:42.755 + echo 'INFO: JSON config files are the same' 00:05:42.755 INFO: JSON config files are the same 00:05:42.755 + rm /tmp/62.PKL /tmp/spdk_tgt_config.json.Xoc 00:05:42.755 + exit 0 00:05:42.755 09:20:15 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:42.755 09:20:15 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:42.755 INFO: changing configuration and checking if this can be detected... 
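The same/different verdict just printed comes from json_diff.sh: both the live configuration and the saved file are passed through config_filter.py -method sort so ordering differences cannot cause false mismatches, and the results are compared with plain diff -u. In outline (paths from the trace; the temp-file names here are illustrative, the real script uses mktemp):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc save_config | $filter -method sort > /tmp/live_sorted.json
    $filter -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json \
        && echo 'INFO: JSON config files are the same'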
00:05:42.755 09:20:15 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.755 09:20:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.013 09:20:15 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.013 09:20:15 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:43.013 09:20:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.013 + '[' 2 -ne 2 ']' 00:05:43.013 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.013 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:43.013 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.013 +++ basename /dev/fd/62 00:05:43.013 ++ mktemp /tmp/62.XXX 00:05:43.013 + tmp_file_1=/tmp/62.hrr 00:05:43.013 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.013 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.013 + tmp_file_2=/tmp/spdk_tgt_config.json.dNx 00:05:43.013 + ret=0 00:05:43.013 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.271 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.271 + diff -u /tmp/62.hrr /tmp/spdk_tgt_config.json.dNx 00:05:43.271 + ret=1 00:05:43.271 + echo '=== Start of file: /tmp/62.hrr ===' 00:05:43.271 + cat /tmp/62.hrr 00:05:43.271 + echo '=== End of file: /tmp/62.hrr ===' 00:05:43.271 + echo '' 00:05:43.271 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dNx ===' 00:05:43.271 + cat /tmp/spdk_tgt_config.json.dNx 00:05:43.271 + echo '=== End of file: /tmp/spdk_tgt_config.json.dNx ===' 00:05:43.271 + echo '' 00:05:43.271 + rm /tmp/62.hrr /tmp/spdk_tgt_config.json.dNx 00:05:43.271 + exit 1 00:05:43.271 09:20:16 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:43.271 INFO: configuration change detected. 
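MallocBdevForConfigChangeCheck exists only as a canary: deleting it guarantees that the live configuration now differs from the saved file, so the diff that succeeded a moment ago must fail and drive ret to 1. Using the same pipeline as the previous sketch (saved_sorted.json produced there; file names again illustrative):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | $filter -method sort > /tmp/live_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json \
        || echo 'INFO: configuration change detected.'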
00:05:43.271 09:20:16 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:43.271 09:20:16 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:43.271 09:20:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.271 09:20:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@321 -- # [[ -n 399274 ]] 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:43.529 09:20:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.529 09:20:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:43.529 09:20:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.529 09:20:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.529 09:20:16 json_config -- json_config/json_config.sh@327 -- # killprocess 399274 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@948 -- # '[' -z 399274 ']' 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@952 -- # kill -0 399274 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@953 -- # uname 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 399274 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 399274' 00:05:43.530 killing process with pid 399274 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@967 -- # kill 399274 00:05:43.530 09:20:16 json_config -- common/autotest_common.sh@972 -- # wait 399274 00:05:46.058 09:20:18 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.058 09:20:18 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:46.058 09:20:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.058 09:20:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.058 09:20:18 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:46.058 09:20:18 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:46.058 INFO: Success 00:05:46.058 00:05:46.058 real 0m17.607s 00:05:46.058 user 
0m19.662s 00:05:46.058 sys 0m1.853s 00:05:46.058 09:20:18 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.058 09:20:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.058 ************************************ 00:05:46.058 END TEST json_config 00:05:46.058 ************************************ 00:05:46.058 09:20:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.058 09:20:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.058 09:20:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.058 09:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:46.058 ************************************ 00:05:46.058 START TEST json_config_extra_key 00:05:46.058 ************************************ 00:05:46.058 09:20:18 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.058 09:20:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.058 09:20:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.058 09:20:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.058 09:20:18 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.058 09:20:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.058 09:20:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.058 09:20:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.058 09:20:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.058 09:20:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.058 09:20:18 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.058 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:46.058 INFO: launching applications... 00:05:46.059 09:20:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=400310 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.059 Waiting for target to run... 00:05:46.059 09:20:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 400310 /var/tmp/spdk_tgt.sock 00:05:46.059 09:20:18 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 400310 ']' 00:05:46.059 09:20:18 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.059 09:20:18 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.059 09:20:18 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.059 09:20:18 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.059 09:20:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.059 [2024-07-25 09:20:18.784412] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
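Each of these launches, including the json_config_extra_key target started just above, goes through the same waitforlisten gate: the test does not proceed until the target's RPC socket answers. One plausible shape for such a wait is sketched below; this is only an illustration of the polling idea, not a claim about how autotest_common.sh actually implements waitforlisten:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$rpc" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; then
            break                     # target is up and serving RPCs
        fi
        sleep 0.1
    done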
00:05:46.059 [2024-07-25 09:20:18.784496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400310 ] 00:05:46.317 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.575 [2024-07-25 09:20:19.255623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.833 [2024-07-25 09:20:19.362420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.090 09:20:19 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.090 09:20:19 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.090 00:05:47.090 09:20:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:47.090 INFO: shutting down applications... 00:05:47.090 09:20:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 400310 ]] 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 400310 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 400310 00:05:47.090 09:20:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.655 09:20:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.655 09:20:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.655 09:20:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 400310 00:05:47.655 09:20:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.655 09:20:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:47.656 09:20:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.656 09:20:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.656 SPDK target shutdown done 00:05:47.656 09:20:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.656 Success 00:05:47.656 00:05:47.656 real 0m1.558s 00:05:47.656 user 0m1.439s 00:05:47.656 sys 0m0.570s 00:05:47.656 09:20:20 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.656 09:20:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 ************************************ 00:05:47.656 END TEST json_config_extra_key 00:05:47.656 ************************************ 00:05:47.656 09:20:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.656 09:20:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.656 09:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.656 09:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 
************************************ 00:05:47.656 START TEST alias_rpc 00:05:47.656 ************************************ 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.656 * Looking for test storage... 00:05:47.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:47.656 09:20:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.656 09:20:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=400502 00:05:47.656 09:20:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.656 09:20:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 400502 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 400502 ']' 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.656 09:20:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 [2024-07-25 09:20:20.387293] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:47.656 [2024-07-25 09:20:20.387407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400502 ] 00:05:47.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.914 [2024-07-25 09:20:20.446634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.914 [2024-07-25 09:20:20.558839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.847 09:20:21 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.847 09:20:21 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.847 09:20:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:48.847 09:20:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 400502 00:05:48.847 09:20:21 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 400502 ']' 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 400502 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 400502 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 400502' 00:05:49.107 killing process with pid 400502 00:05:49.107 09:20:21 alias_rpc -- common/autotest_common.sh@967 -- # kill 400502 00:05:49.107 09:20:21 alias_rpc -- 
common/autotest_common.sh@972 -- # wait 400502 00:05:49.443 00:05:49.443 real 0m1.780s 00:05:49.443 user 0m2.045s 00:05:49.443 sys 0m0.454s 00:05:49.443 09:20:22 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.443 09:20:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 ************************************ 00:05:49.443 END TEST alias_rpc 00:05:49.443 ************************************ 00:05:49.443 09:20:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:49.443 09:20:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.443 09:20:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.443 09:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.443 09:20:22 -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 ************************************ 00:05:49.443 START TEST spdkcli_tcp 00:05:49.443 ************************************ 00:05:49.443 09:20:22 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.443 * Looking for test storage... 00:05:49.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.443 09:20:22 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.443 09:20:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=400812 00:05:49.444 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.444 09:20:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 400812 00:05:49.444 09:20:22 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 400812 ']' 00:05:49.444 09:20:22 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.444 09:20:22 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.444 09:20:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.444 09:20:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.444 09:20:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.701 [2024-07-25 09:20:22.212405] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:05:49.701 [2024-07-25 09:20:22.212486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400812 ] 00:05:49.701 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.701 [2024-07-25 09:20:22.273330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.701 [2024-07-25 09:20:22.392218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.701 [2024-07-25 09:20:22.392222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.632 09:20:23 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.632 09:20:23 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:50.632 09:20:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=400834 00:05:50.632 09:20:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.632 09:20:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.889 [ 00:05:50.889 "bdev_malloc_delete", 00:05:50.889 "bdev_malloc_create", 00:05:50.889 "bdev_null_resize", 00:05:50.889 "bdev_null_delete", 00:05:50.889 "bdev_null_create", 00:05:50.889 "bdev_nvme_cuse_unregister", 00:05:50.889 "bdev_nvme_cuse_register", 00:05:50.889 "bdev_opal_new_user", 00:05:50.889 "bdev_opal_set_lock_state", 00:05:50.890 "bdev_opal_delete", 00:05:50.890 "bdev_opal_get_info", 00:05:50.890 "bdev_opal_create", 00:05:50.890 "bdev_nvme_opal_revert", 00:05:50.890 "bdev_nvme_opal_init", 00:05:50.890 "bdev_nvme_send_cmd", 00:05:50.890 "bdev_nvme_get_path_iostat", 00:05:50.890 "bdev_nvme_get_mdns_discovery_info", 00:05:50.890 "bdev_nvme_stop_mdns_discovery", 00:05:50.890 "bdev_nvme_start_mdns_discovery", 00:05:50.890 "bdev_nvme_set_multipath_policy", 00:05:50.890 "bdev_nvme_set_preferred_path", 00:05:50.890 "bdev_nvme_get_io_paths", 00:05:50.890 "bdev_nvme_remove_error_injection", 00:05:50.890 "bdev_nvme_add_error_injection", 00:05:50.890 "bdev_nvme_get_discovery_info", 00:05:50.890 "bdev_nvme_stop_discovery", 00:05:50.890 "bdev_nvme_start_discovery", 00:05:50.890 "bdev_nvme_get_controller_health_info", 00:05:50.890 "bdev_nvme_disable_controller", 00:05:50.890 "bdev_nvme_enable_controller", 00:05:50.890 "bdev_nvme_reset_controller", 00:05:50.890 "bdev_nvme_get_transport_statistics", 00:05:50.890 "bdev_nvme_apply_firmware", 00:05:50.890 "bdev_nvme_detach_controller", 00:05:50.890 "bdev_nvme_get_controllers", 00:05:50.890 "bdev_nvme_attach_controller", 00:05:50.890 "bdev_nvme_set_hotplug", 00:05:50.890 "bdev_nvme_set_options", 00:05:50.890 "bdev_passthru_delete", 00:05:50.890 "bdev_passthru_create", 00:05:50.890 "bdev_lvol_set_parent_bdev", 00:05:50.890 "bdev_lvol_set_parent", 00:05:50.890 "bdev_lvol_check_shallow_copy", 00:05:50.890 "bdev_lvol_start_shallow_copy", 00:05:50.890 "bdev_lvol_grow_lvstore", 00:05:50.890 "bdev_lvol_get_lvols", 00:05:50.890 "bdev_lvol_get_lvstores", 00:05:50.890 "bdev_lvol_delete", 00:05:50.890 "bdev_lvol_set_read_only", 00:05:50.890 "bdev_lvol_resize", 00:05:50.890 "bdev_lvol_decouple_parent", 00:05:50.890 "bdev_lvol_inflate", 00:05:50.890 "bdev_lvol_rename", 00:05:50.890 "bdev_lvol_clone_bdev", 00:05:50.890 "bdev_lvol_clone", 00:05:50.890 "bdev_lvol_snapshot", 00:05:50.890 "bdev_lvol_create", 00:05:50.890 "bdev_lvol_delete_lvstore", 00:05:50.890 
"bdev_lvol_rename_lvstore", 00:05:50.890 "bdev_lvol_create_lvstore", 00:05:50.890 "bdev_raid_set_options", 00:05:50.890 "bdev_raid_remove_base_bdev", 00:05:50.890 "bdev_raid_add_base_bdev", 00:05:50.890 "bdev_raid_delete", 00:05:50.890 "bdev_raid_create", 00:05:50.890 "bdev_raid_get_bdevs", 00:05:50.890 "bdev_error_inject_error", 00:05:50.890 "bdev_error_delete", 00:05:50.890 "bdev_error_create", 00:05:50.890 "bdev_split_delete", 00:05:50.890 "bdev_split_create", 00:05:50.890 "bdev_delay_delete", 00:05:50.890 "bdev_delay_create", 00:05:50.890 "bdev_delay_update_latency", 00:05:50.890 "bdev_zone_block_delete", 00:05:50.890 "bdev_zone_block_create", 00:05:50.890 "blobfs_create", 00:05:50.890 "blobfs_detect", 00:05:50.890 "blobfs_set_cache_size", 00:05:50.890 "bdev_aio_delete", 00:05:50.890 "bdev_aio_rescan", 00:05:50.890 "bdev_aio_create", 00:05:50.890 "bdev_ftl_set_property", 00:05:50.890 "bdev_ftl_get_properties", 00:05:50.890 "bdev_ftl_get_stats", 00:05:50.890 "bdev_ftl_unmap", 00:05:50.890 "bdev_ftl_unload", 00:05:50.890 "bdev_ftl_delete", 00:05:50.890 "bdev_ftl_load", 00:05:50.890 "bdev_ftl_create", 00:05:50.890 "bdev_virtio_attach_controller", 00:05:50.890 "bdev_virtio_scsi_get_devices", 00:05:50.890 "bdev_virtio_detach_controller", 00:05:50.890 "bdev_virtio_blk_set_hotplug", 00:05:50.890 "bdev_iscsi_delete", 00:05:50.890 "bdev_iscsi_create", 00:05:50.890 "bdev_iscsi_set_options", 00:05:50.890 "accel_error_inject_error", 00:05:50.890 "ioat_scan_accel_module", 00:05:50.890 "dsa_scan_accel_module", 00:05:50.890 "iaa_scan_accel_module", 00:05:50.890 "vfu_virtio_create_scsi_endpoint", 00:05:50.890 "vfu_virtio_scsi_remove_target", 00:05:50.890 "vfu_virtio_scsi_add_target", 00:05:50.890 "vfu_virtio_create_blk_endpoint", 00:05:50.890 "vfu_virtio_delete_endpoint", 00:05:50.890 "keyring_file_remove_key", 00:05:50.890 "keyring_file_add_key", 00:05:50.890 "keyring_linux_set_options", 00:05:50.890 "iscsi_get_histogram", 00:05:50.890 "iscsi_enable_histogram", 00:05:50.890 "iscsi_set_options", 00:05:50.890 "iscsi_get_auth_groups", 00:05:50.890 "iscsi_auth_group_remove_secret", 00:05:50.890 "iscsi_auth_group_add_secret", 00:05:50.890 "iscsi_delete_auth_group", 00:05:50.890 "iscsi_create_auth_group", 00:05:50.890 "iscsi_set_discovery_auth", 00:05:50.890 "iscsi_get_options", 00:05:50.890 "iscsi_target_node_request_logout", 00:05:50.890 "iscsi_target_node_set_redirect", 00:05:50.890 "iscsi_target_node_set_auth", 00:05:50.890 "iscsi_target_node_add_lun", 00:05:50.890 "iscsi_get_stats", 00:05:50.890 "iscsi_get_connections", 00:05:50.890 "iscsi_portal_group_set_auth", 00:05:50.890 "iscsi_start_portal_group", 00:05:50.890 "iscsi_delete_portal_group", 00:05:50.890 "iscsi_create_portal_group", 00:05:50.890 "iscsi_get_portal_groups", 00:05:50.890 "iscsi_delete_target_node", 00:05:50.890 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.890 "iscsi_target_node_add_pg_ig_maps", 00:05:50.890 "iscsi_create_target_node", 00:05:50.890 "iscsi_get_target_nodes", 00:05:50.890 "iscsi_delete_initiator_group", 00:05:50.890 "iscsi_initiator_group_remove_initiators", 00:05:50.890 "iscsi_initiator_group_add_initiators", 00:05:50.890 "iscsi_create_initiator_group", 00:05:50.890 "iscsi_get_initiator_groups", 00:05:50.890 "nvmf_set_crdt", 00:05:50.890 "nvmf_set_config", 00:05:50.890 "nvmf_set_max_subsystems", 00:05:50.890 "nvmf_stop_mdns_prr", 00:05:50.890 "nvmf_publish_mdns_prr", 00:05:50.890 "nvmf_subsystem_get_listeners", 00:05:50.890 "nvmf_subsystem_get_qpairs", 00:05:50.890 "nvmf_subsystem_get_controllers", 00:05:50.890 
"nvmf_get_stats", 00:05:50.890 "nvmf_get_transports", 00:05:50.890 "nvmf_create_transport", 00:05:50.890 "nvmf_get_targets", 00:05:50.890 "nvmf_delete_target", 00:05:50.890 "nvmf_create_target", 00:05:50.890 "nvmf_subsystem_allow_any_host", 00:05:50.890 "nvmf_subsystem_remove_host", 00:05:50.890 "nvmf_subsystem_add_host", 00:05:50.890 "nvmf_ns_remove_host", 00:05:50.890 "nvmf_ns_add_host", 00:05:50.890 "nvmf_subsystem_remove_ns", 00:05:50.890 "nvmf_subsystem_add_ns", 00:05:50.890 "nvmf_subsystem_listener_set_ana_state", 00:05:50.890 "nvmf_discovery_get_referrals", 00:05:50.890 "nvmf_discovery_remove_referral", 00:05:50.890 "nvmf_discovery_add_referral", 00:05:50.890 "nvmf_subsystem_remove_listener", 00:05:50.890 "nvmf_subsystem_add_listener", 00:05:50.890 "nvmf_delete_subsystem", 00:05:50.890 "nvmf_create_subsystem", 00:05:50.890 "nvmf_get_subsystems", 00:05:50.890 "env_dpdk_get_mem_stats", 00:05:50.890 "nbd_get_disks", 00:05:50.890 "nbd_stop_disk", 00:05:50.890 "nbd_start_disk", 00:05:50.890 "ublk_recover_disk", 00:05:50.890 "ublk_get_disks", 00:05:50.890 "ublk_stop_disk", 00:05:50.890 "ublk_start_disk", 00:05:50.890 "ublk_destroy_target", 00:05:50.890 "ublk_create_target", 00:05:50.890 "virtio_blk_create_transport", 00:05:50.890 "virtio_blk_get_transports", 00:05:50.890 "vhost_controller_set_coalescing", 00:05:50.890 "vhost_get_controllers", 00:05:50.890 "vhost_delete_controller", 00:05:50.890 "vhost_create_blk_controller", 00:05:50.890 "vhost_scsi_controller_remove_target", 00:05:50.890 "vhost_scsi_controller_add_target", 00:05:50.890 "vhost_start_scsi_controller", 00:05:50.890 "vhost_create_scsi_controller", 00:05:50.890 "thread_set_cpumask", 00:05:50.890 "framework_get_governor", 00:05:50.890 "framework_get_scheduler", 00:05:50.890 "framework_set_scheduler", 00:05:50.890 "framework_get_reactors", 00:05:50.890 "thread_get_io_channels", 00:05:50.890 "thread_get_pollers", 00:05:50.890 "thread_get_stats", 00:05:50.890 "framework_monitor_context_switch", 00:05:50.890 "spdk_kill_instance", 00:05:50.890 "log_enable_timestamps", 00:05:50.890 "log_get_flags", 00:05:50.890 "log_clear_flag", 00:05:50.890 "log_set_flag", 00:05:50.890 "log_get_level", 00:05:50.890 "log_set_level", 00:05:50.890 "log_get_print_level", 00:05:50.890 "log_set_print_level", 00:05:50.890 "framework_enable_cpumask_locks", 00:05:50.890 "framework_disable_cpumask_locks", 00:05:50.890 "framework_wait_init", 00:05:50.890 "framework_start_init", 00:05:50.890 "scsi_get_devices", 00:05:50.890 "bdev_get_histogram", 00:05:50.890 "bdev_enable_histogram", 00:05:50.890 "bdev_set_qos_limit", 00:05:50.890 "bdev_set_qd_sampling_period", 00:05:50.890 "bdev_get_bdevs", 00:05:50.890 "bdev_reset_iostat", 00:05:50.890 "bdev_get_iostat", 00:05:50.890 "bdev_examine", 00:05:50.890 "bdev_wait_for_examine", 00:05:50.890 "bdev_set_options", 00:05:50.890 "notify_get_notifications", 00:05:50.890 "notify_get_types", 00:05:50.890 "accel_get_stats", 00:05:50.890 "accel_set_options", 00:05:50.890 "accel_set_driver", 00:05:50.890 "accel_crypto_key_destroy", 00:05:50.890 "accel_crypto_keys_get", 00:05:50.890 "accel_crypto_key_create", 00:05:50.890 "accel_assign_opc", 00:05:50.890 "accel_get_module_info", 00:05:50.890 "accel_get_opc_assignments", 00:05:50.890 "vmd_rescan", 00:05:50.890 "vmd_remove_device", 00:05:50.890 "vmd_enable", 00:05:50.890 "sock_get_default_impl", 00:05:50.890 "sock_set_default_impl", 00:05:50.890 "sock_impl_set_options", 00:05:50.890 "sock_impl_get_options", 00:05:50.891 "iobuf_get_stats", 00:05:50.891 "iobuf_set_options", 
00:05:50.891 "keyring_get_keys", 00:05:50.891 "framework_get_pci_devices", 00:05:50.891 "framework_get_config", 00:05:50.891 "framework_get_subsystems", 00:05:50.891 "vfu_tgt_set_base_path", 00:05:50.891 "trace_get_info", 00:05:50.891 "trace_get_tpoint_group_mask", 00:05:50.891 "trace_disable_tpoint_group", 00:05:50.891 "trace_enable_tpoint_group", 00:05:50.891 "trace_clear_tpoint_mask", 00:05:50.891 "trace_set_tpoint_mask", 00:05:50.891 "spdk_get_version", 00:05:50.891 "rpc_get_methods" 00:05:50.891 ] 00:05:50.891 09:20:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.891 09:20:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.891 09:20:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 400812 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 400812 ']' 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 400812 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 400812 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 400812' 00:05:50.891 killing process with pid 400812 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 400812 00:05:50.891 09:20:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 400812 00:05:51.456 00:05:51.456 real 0m1.797s 00:05:51.456 user 0m3.465s 00:05:51.456 sys 0m0.474s 00:05:51.456 09:20:23 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.456 09:20:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.456 ************************************ 00:05:51.456 END TEST spdkcli_tcp 00:05:51.456 ************************************ 00:05:51.456 09:20:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.456 09:20:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.456 09:20:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.456 09:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:51.456 ************************************ 00:05:51.456 START TEST dpdk_mem_utility 00:05:51.456 ************************************ 00:05:51.456 09:20:23 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.456 * Looking for test storage... 
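Note on the spdkcli_tcp run above: the test exposes the target's UNIX-domain RPC socket over TCP by running socat as a bridge and then drives rpc.py against the TCP endpoint; the long bracketed list is simply the JSON array returned by rpc_get_methods. Stripped of the Jenkins workspace paths, the two commands are roughly:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # bridge 127.0.0.1:9998 to the RPC socket
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

where ./scripts/rpc.py is relative to the SPDK checkout and -r/-t are the connection-retry and timeout options passed by the test.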
00:05:51.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:51.456 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.456 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=401025 00:05:51.456 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.456 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 401025 00:05:51.456 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 401025 ']' 00:05:51.456 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.456 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.456 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.456 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.456 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.456 [2024-07-25 09:20:24.059073] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:51.456 [2024-07-25 09:20:24.059164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401025 ] 00:05:51.456 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.456 [2024-07-25 09:20:24.117362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.714 [2024-07-25 09:20:24.224542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.972 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.972 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:51.972 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.972 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.972 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.972 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.972 { 00:05:51.972 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.972 } 00:05:51.972 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.972 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.972 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:51.972 1 heaps totaling size 814.000000 MiB 00:05:51.972 size: 814.000000 MiB heap id: 0 00:05:51.972 end heaps---------- 00:05:51.972 8 mempools totaling size 598.116089 MiB 00:05:51.972 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.972 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.972 size: 84.521057 MiB name: bdev_io_401025 00:05:51.972 size: 51.011292 MiB name: evtpool_401025 00:05:51.972 size: 
50.003479 MiB name: msgpool_401025 00:05:51.972 size: 21.763794 MiB name: PDU_Pool 00:05:51.972 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.972 size: 0.026123 MiB name: Session_Pool 00:05:51.972 end mempools------- 00:05:51.972 6 memzones totaling size 4.142822 MiB 00:05:51.972 size: 1.000366 MiB name: RG_ring_0_401025 00:05:51.972 size: 1.000366 MiB name: RG_ring_1_401025 00:05:51.972 size: 1.000366 MiB name: RG_ring_4_401025 00:05:51.972 size: 1.000366 MiB name: RG_ring_5_401025 00:05:51.972 size: 0.125366 MiB name: RG_ring_2_401025 00:05:51.972 size: 0.015991 MiB name: RG_ring_3_401025 00:05:51.972 end memzones------- 00:05:51.972 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.972 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:51.972 list of free elements. size: 12.519348 MiB 00:05:51.972 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:51.972 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:51.972 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:51.972 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:51.972 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:51.972 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:51.972 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:51.972 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:51.972 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:51.972 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:51.972 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:51.972 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:51.972 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:51.972 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:51.972 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:51.972 list of standard malloc elements. 
size: 199.218079 MiB 00:05:51.972 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:51.972 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:51.972 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:51.972 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:51.972 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:51.972 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:51.972 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:51.972 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:51.972 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:51.972 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:51.972 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:51.972 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:51.972 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:51.972 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:51.973 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:51.973 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:51.973 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:51.973 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:51.973 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:51.973 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:51.973 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:51.973 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:51.973 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:51.973 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:51.973 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:51.973 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:51.973 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:51.973 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:51.973 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:51.973 list of memzone associated elements. 
size: 602.262573 MiB 00:05:51.973 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:51.973 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.973 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:51.973 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.973 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:51.973 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_401025_0 00:05:51.973 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:51.973 associated memzone info: size: 48.002930 MiB name: MP_evtpool_401025_0 00:05:51.973 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:51.973 associated memzone info: size: 48.002930 MiB name: MP_msgpool_401025_0 00:05:51.973 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:51.973 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.973 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:51.973 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.973 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:51.973 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_401025 00:05:51.973 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:51.973 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_401025 00:05:51.973 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:51.973 associated memzone info: size: 1.007996 MiB name: MP_evtpool_401025 00:05:51.973 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:51.973 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.973 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:51.973 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.973 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:51.973 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.973 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:51.973 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.973 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:51.973 associated memzone info: size: 1.000366 MiB name: RG_ring_0_401025 00:05:51.973 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:51.973 associated memzone info: size: 1.000366 MiB name: RG_ring_1_401025 00:05:51.973 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:51.973 associated memzone info: size: 1.000366 MiB name: RG_ring_4_401025 00:05:51.973 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:51.973 associated memzone info: size: 1.000366 MiB name: RG_ring_5_401025 00:05:51.973 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:51.973 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_401025 00:05:51.973 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:51.973 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.973 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:51.973 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.973 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:51.973 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.973 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:51.973 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_401025 00:05:51.973 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:51.973 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.973 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:51.973 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.973 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:51.973 associated memzone info: size: 0.015991 MiB name: RG_ring_3_401025 00:05:51.973 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:51.973 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.973 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:51.973 associated memzone info: size: 0.000183 MiB name: MP_msgpool_401025 00:05:51.973 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:51.973 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_401025 00:05:51.973 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:51.973 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.973 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.973 09:20:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 401025 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 401025 ']' 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 401025 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 401025 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 401025' 00:05:51.973 killing process with pid 401025 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 401025 00:05:51.973 09:20:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 401025 00:05:52.539 00:05:52.539 real 0m1.126s 00:05:52.539 user 0m1.092s 00:05:52.539 sys 0m0.397s 00:05:52.539 09:20:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.539 09:20:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.539 ************************************ 00:05:52.539 END TEST dpdk_mem_utility 00:05:52.539 ************************************ 00:05:52.539 09:20:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:52.539 09:20:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.539 09:20:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.539 09:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.539 ************************************ 00:05:52.539 START TEST event 00:05:52.539 ************************************ 00:05:52.539 09:20:25 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:52.539 * Looking for test storage... 
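Note on the dpdk_mem_utility output above: the test asks the running target to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (which, per the reply, writes /tmp/spdk_mem_dump.txt) and then post-processes that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element listing of heap 0. Reduced to the essentials, the sequence is roughly:

    ./scripts/rpc.py env_dpdk_get_mem_stats       # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                    # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0               # free/malloc element detail for heap id 0

(rpc_cmd in the trace appears to be the test framework's thin wrapper around scripts/rpc.py.)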
00:05:52.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:52.539 09:20:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:52.539 09:20:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.539 09:20:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.539 09:20:25 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:52.539 09:20:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.539 09:20:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.539 ************************************ 00:05:52.539 START TEST event_perf 00:05:52.539 ************************************ 00:05:52.539 09:20:25 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.539 Running I/O for 1 seconds...[2024-07-25 09:20:25.218747] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:52.539 [2024-07-25 09:20:25.218812] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401217 ] 00:05:52.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.796 [2024-07-25 09:20:25.281367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.796 [2024-07-25 09:20:25.402201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.796 [2024-07-25 09:20:25.402252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.796 [2024-07-25 09:20:25.402372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.796 [2024-07-25 09:20:25.402379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.168 Running I/O for 1 seconds... 00:05:54.168 lcore 0: 230201 00:05:54.168 lcore 1: 230202 00:05:54.168 lcore 2: 230203 00:05:54.168 lcore 3: 230202 00:05:54.168 done. 00:05:54.168 00:05:54.168 real 0m1.321s 00:05:54.168 user 0m4.226s 00:05:54.168 sys 0m0.090s 00:05:54.168 09:20:26 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.168 09:20:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.168 ************************************ 00:05:54.168 END TEST event_perf 00:05:54.168 ************************************ 00:05:54.168 09:20:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.168 09:20:26 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:54.168 09:20:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.168 09:20:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.168 ************************************ 00:05:54.168 START TEST event_reactor 00:05:54.168 ************************************ 00:05:54.168 09:20:26 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.168 [2024-07-25 09:20:26.585504] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
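Note on the event_perf numbers above: the benchmark runs the SPDK event framework on the cores given by the -m mask for -t seconds and prints a per-lcore count of events handled (the "lcore N:" lines); in this run each of the four reactors got through roughly 230k events in one second. Minus the workspace prefix, the invocation is simply:

    ./test/event/event_perf/event_perf -m 0xF -t 1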
00:05:54.168 [2024-07-25 09:20:26.585569] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401402 ] 00:05:54.168 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.168 [2024-07-25 09:20:26.651649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.168 [2024-07-25 09:20:26.769824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.539 test_start 00:05:55.539 oneshot 00:05:55.539 tick 100 00:05:55.539 tick 100 00:05:55.539 tick 250 00:05:55.539 tick 100 00:05:55.539 tick 100 00:05:55.539 tick 250 00:05:55.539 tick 100 00:05:55.539 tick 500 00:05:55.539 tick 100 00:05:55.539 tick 100 00:05:55.539 tick 250 00:05:55.539 tick 100 00:05:55.539 tick 100 00:05:55.539 test_end 00:05:55.539 00:05:55.539 real 0m1.319s 00:05:55.539 user 0m1.230s 00:05:55.539 sys 0m0.084s 00:05:55.539 09:20:27 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.539 09:20:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.539 ************************************ 00:05:55.539 END TEST event_reactor 00:05:55.539 ************************************ 00:05:55.539 09:20:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.539 09:20:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:55.539 09:20:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.539 09:20:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.539 ************************************ 00:05:55.539 START TEST event_reactor_perf 00:05:55.539 ************************************ 00:05:55.539 09:20:27 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.539 [2024-07-25 09:20:27.955529] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:05:55.539 [2024-07-25 09:20:27.955593] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401650 ] 00:05:55.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.539 [2024-07-25 09:20:28.019985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.539 [2024-07-25 09:20:28.132790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.911 test_start 00:05:56.911 test_end 00:05:56.911 Performance: 357802 events per second 00:05:56.911 00:05:56.911 real 0m1.308s 00:05:56.911 user 0m1.218s 00:05:56.911 sys 0m0.085s 00:05:56.911 09:20:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.911 09:20:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 ************************************ 00:05:56.911 END TEST event_reactor_perf 00:05:56.911 ************************************ 00:05:56.911 09:20:29 event -- event/event.sh@49 -- # uname -s 00:05:56.911 09:20:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.911 09:20:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.911 09:20:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.911 09:20:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.911 09:20:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 ************************************ 00:05:56.911 START TEST event_scheduler 00:05:56.911 ************************************ 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.911 * Looking for test storage... 00:05:56.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=401833 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 401833 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 401833 ']' 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 [2024-07-25 09:20:29.399216] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:56.911 [2024-07-25 09:20:29.399301] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401833 ] 00:05:56.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.911 [2024-07-25 09:20:29.456090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.911 [2024-07-25 09:20:29.564466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.911 [2024-07-25 09:20:29.564523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.911 [2024-07-25 09:20:29.564589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.911 [2024-07-25 09:20:29.564592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 [2024-07-25 09:20:29.617347] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:56.911 [2024-07-25 09:20:29.617381] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.911 [2024-07-25 09:20:29.617399] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.911 [2024-07-25 09:20:29.617411] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.911 [2024-07-25 09:20:29.617421] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.911 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.911 09:20:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.912 09:20:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 [2024-07-25 09:20:29.715918] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
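Note on the scheduler setup traced above: because the scheduler app was launched with --wait-for-rpc, the test can switch the framework to the dynamic scheduler before subsystem init completes. The NOTICE lines show the dynamic scheduler failing to initialize the DPDK governor (the core mask covers only part of an SMT sibling set on this host) and then applying load/core/busy thresholds of 20/80/95. The RPC sequence, again with rpc_cmd standing in for scripts/rpc.py, is essentially:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init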
00:05:57.170 09:20:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.170 09:20:29 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.170 09:20:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 ************************************ 00:05:57.170 START TEST scheduler_create_thread 00:05:57.170 ************************************ 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 2 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 3 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 4 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 5 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 6 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 7 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 8 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 9 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 10 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.170 09:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.813 09:20:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.813 00:05:57.813 real 0m0.591s 00:05:57.813 user 0m0.009s 00:05:57.813 sys 0m0.005s 00:05:57.813 09:20:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.813 09:20:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.813 ************************************ 00:05:57.813 END TEST scheduler_create_thread 00:05:57.813 ************************************ 00:05:57.813 09:20:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.813 09:20:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 401833 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 401833 ']' 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 401833 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 401833 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 401833' 00:05:57.813 killing process with pid 401833 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 401833 00:05:57.813 09:20:30 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 401833 00:05:58.424 [2024-07-25 09:20:30.816587] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
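Note on scheduler_create_thread above: the inner test drives a small rpc.py plugin (scheduler_plugin) that creates SPDK threads with a name, an optional pinned cpumask and a target "active" percentage, then adjusts and deletes them; the numeric thread ids (11, 12) are simply whatever the create calls returned in this run. For example, dropping the workspace prefix:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12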
00:05:58.424 00:05:58.424 real 0m1.771s 00:05:58.424 user 0m2.254s 00:05:58.424 sys 0m0.332s 00:05:58.424 09:20:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.424 09:20:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.424 ************************************ 00:05:58.424 END TEST event_scheduler 00:05:58.424 ************************************ 00:05:58.424 09:20:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:58.424 09:20:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:58.424 09:20:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.424 09:20:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.424 09:20:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.424 ************************************ 00:05:58.424 START TEST app_repeat 00:05:58.424 ************************************ 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=402088 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 402088' 00:05:58.424 Process app_repeat pid: 402088 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:58.424 spdk_app_start Round 0 00:05:58.424 09:20:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 402088 /var/tmp/spdk-nbd.sock 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 402088 ']' 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.424 09:20:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.711 [2024-07-25 09:20:31.150704] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:05:58.711 [2024-07-25 09:20:31.150780] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402088 ] 00:05:58.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.711 [2024-07-25 09:20:31.216054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.711 [2024-07-25 09:20:31.332559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.711 [2024-07-25 09:20:31.332565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.000 09:20:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.000 09:20:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:59.000 09:20:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.000 Malloc0 00:05:59.000 09:20:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.288 Malloc1 00:05:59.288 09:20:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.288 09:20:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.560 /dev/nbd0 00:05:59.560 09:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.560 09:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.560 09:20:32 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.560 1+0 records in 00:05:59.560 1+0 records out 00:05:59.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019657 s, 20.8 MB/s 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.560 09:20:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.560 09:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.560 09:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.560 09:20:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.840 /dev/nbd1 00:05:59.840 09:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.840 09:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.840 1+0 records in 00:05:59.840 1+0 records out 00:05:59.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185571 s, 22.1 MB/s 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.840 09:20:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.840 09:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.840 09:20:32 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.840 09:20:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.840 09:20:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.840 09:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.112 { 00:06:00.112 "nbd_device": "/dev/nbd0", 00:06:00.112 "bdev_name": "Malloc0" 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "nbd_device": "/dev/nbd1", 00:06:00.112 "bdev_name": "Malloc1" 00:06:00.112 } 00:06:00.112 ]' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.112 { 00:06:00.112 "nbd_device": "/dev/nbd0", 00:06:00.112 "bdev_name": "Malloc0" 00:06:00.112 }, 00:06:00.112 { 00:06:00.112 "nbd_device": "/dev/nbd1", 00:06:00.112 "bdev_name": "Malloc1" 00:06:00.112 } 00:06:00.112 ]' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.112 /dev/nbd1' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.112 /dev/nbd1' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.112 256+0 records in 00:06:00.112 256+0 records out 00:06:00.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503759 s, 208 MB/s 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.112 09:20:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.378 256+0 records in 00:06:00.378 256+0 records out 00:06:00.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220791 s, 47.5 MB/s 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.378 256+0 records in 00:06:00.378 256+0 records out 00:06:00.378 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0231996 s, 45.2 MB/s 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.378 09:20:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.653 09:20:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.916 09:20:33 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.916 09:20:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.174 09:20:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.174 09:20:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.432 09:20:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.691 [2024-07-25 09:20:34.265539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.691 [2024-07-25 09:20:34.381186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.691 [2024-07-25 09:20:34.381187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.948 [2024-07-25 09:20:34.440206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.948 [2024-07-25 09:20:34.440261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.477 09:20:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.477 09:20:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.477 spdk_app_start Round 1 00:06:04.478 09:20:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 402088 /var/tmp/spdk-nbd.sock 00:06:04.478 09:20:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 402088 ']' 00:06:04.478 09:20:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.478 09:20:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.478 09:20:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
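The Round 0 pass traced above follows the nbd_rpc_data_verify pattern that app_repeat re-runs for every round: expose the two malloc bdevs as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each, read it back with cmp, then tear everything down. A condensed sketch of that sequence, assuming it is run by hand from an SPDK checkout with the app listening on /var/tmp/spdk-nbd.sock as in this run (file names are illustrative):

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # 64 MB bdev, 4 KiB blocks -> Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                          # 1 MiB of random data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct                   # write it to the nbd device
        cmp -b -n 1M nbdrandtest $d                                              # read back and compare
    done
    rm nbdrandtest
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # stop the app so the next round can start

The same sequence repeats for Round 1 and Round 2 in the trace that follows, only the repeat_pid and timestamps differ.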
00:06:04.478 09:20:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.478 09:20:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.735 09:20:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.735 09:20:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.735 09:20:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.735 Malloc0 00:06:04.993 09:20:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.993 Malloc1 00:06:05.250 09:20:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.250 09:20:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.508 /dev/nbd0 00:06:05.508 09:20:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.508 09:20:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.508 1+0 records in 00:06:05.508 1+0 records out 00:06:05.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196852 s, 20.8 MB/s 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.508 09:20:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.508 09:20:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.508 09:20:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.508 09:20:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.766 /dev/nbd1 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.766 1+0 records in 00:06:05.766 1+0 records out 00:06:05.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213156 s, 19.2 MB/s 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.766 09:20:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.766 09:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:06.024 { 00:06:06.024 "nbd_device": "/dev/nbd0", 00:06:06.024 "bdev_name": "Malloc0" 00:06:06.024 }, 00:06:06.024 { 00:06:06.024 "nbd_device": "/dev/nbd1", 00:06:06.024 "bdev_name": "Malloc1" 00:06:06.024 } 00:06:06.024 ]' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.024 { 00:06:06.024 "nbd_device": "/dev/nbd0", 00:06:06.024 "bdev_name": "Malloc0" 00:06:06.024 }, 00:06:06.024 { 00:06:06.024 "nbd_device": "/dev/nbd1", 00:06:06.024 "bdev_name": "Malloc1" 00:06:06.024 } 00:06:06.024 ]' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.024 /dev/nbd1' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.024 /dev/nbd1' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.024 256+0 records in 00:06:06.024 256+0 records out 00:06:06.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493693 s, 212 MB/s 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.024 256+0 records in 00:06:06.024 256+0 records out 00:06:06.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203302 s, 51.6 MB/s 00:06:06.024 09:20:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.025 256+0 records in 00:06:06.025 256+0 records out 00:06:06.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230922 s, 45.4 MB/s 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.025 09:20:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.283 09:20:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.541 09:20:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.799 09:20:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.799 09:20:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.056 09:20:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.314 [2024-07-25 09:20:40.048992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.572 [2024-07-25 09:20:40.167561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.572 [2024-07-25 09:20:40.167566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.572 [2024-07-25 09:20:40.227379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.572 [2024-07-25 09:20:40.227455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.099 09:20:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.099 09:20:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.099 spdk_app_start Round 2 00:06:10.099 09:20:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 402088 /var/tmp/spdk-nbd.sock 00:06:10.099 09:20:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 402088 ']' 00:06:10.099 09:20:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.099 09:20:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.099 09:20:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:10.099 09:20:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.099 09:20:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.357 09:20:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.357 09:20:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:10.357 09:20:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.615 Malloc0 00:06:10.615 09:20:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.873 Malloc1 00:06:10.873 09:20:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.873 09:20:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.131 /dev/nbd0 00:06:11.131 09:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.131 09:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.131 1+0 records in 00:06:11.131 1+0 records out 00:06:11.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160138 s, 25.6 MB/s 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.131 09:20:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.131 09:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.131 09:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.131 09:20:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.388 /dev/nbd1 00:06:11.388 09:20:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.388 09:20:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.389 1+0 records in 00:06:11.389 1+0 records out 00:06:11.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017485 s, 23.4 MB/s 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.389 09:20:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.389 09:20:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.389 09:20:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.389 09:20:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.389 09:20:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.389 09:20:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:11.647 { 00:06:11.647 "nbd_device": "/dev/nbd0", 00:06:11.647 "bdev_name": "Malloc0" 00:06:11.647 }, 00:06:11.647 { 00:06:11.647 "nbd_device": "/dev/nbd1", 00:06:11.647 "bdev_name": "Malloc1" 00:06:11.647 } 00:06:11.647 ]' 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.647 { 00:06:11.647 "nbd_device": "/dev/nbd0", 00:06:11.647 "bdev_name": "Malloc0" 00:06:11.647 }, 00:06:11.647 { 00:06:11.647 "nbd_device": "/dev/nbd1", 00:06:11.647 "bdev_name": "Malloc1" 00:06:11.647 } 00:06:11.647 ]' 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.647 /dev/nbd1' 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.647 /dev/nbd1' 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.647 09:20:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.905 256+0 records in 00:06:11.905 256+0 records out 00:06:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496677 s, 211 MB/s 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.905 256+0 records in 00:06:11.905 256+0 records out 00:06:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212217 s, 49.4 MB/s 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.905 256+0 records in 00:06:11.905 256+0 records out 00:06:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222711 s, 47.1 MB/s 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.905 09:20:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.163 09:20:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.421 09:20:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.679 09:20:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.679 09:20:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.937 09:20:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.195 [2024-07-25 09:20:45.840379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.454 [2024-07-25 09:20:45.955946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.454 [2024-07-25 09:20:45.955946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.454 [2024-07-25 09:20:46.012614] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.454 [2024-07-25 09:20:46.012687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.981 09:20:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 402088 /var/tmp/spdk-nbd.sock 00:06:15.981 09:20:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 402088 ']' 00:06:15.981 09:20:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.981 09:20:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.981 09:20:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:15.981 09:20:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.981 09:20:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.239 09:20:48 event.app_repeat -- event/event.sh@39 -- # killprocess 402088 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 402088 ']' 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 402088 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 402088 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 402088' 00:06:16.239 killing process with pid 402088 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@967 -- # kill 402088 00:06:16.239 09:20:48 event.app_repeat -- common/autotest_common.sh@972 -- # wait 402088 00:06:16.498 spdk_app_start is called in Round 0. 00:06:16.498 Shutdown signal received, stop current app iteration 00:06:16.498 Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 reinitialization... 00:06:16.498 spdk_app_start is called in Round 1. 00:06:16.498 Shutdown signal received, stop current app iteration 00:06:16.498 Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 reinitialization... 00:06:16.498 spdk_app_start is called in Round 2. 00:06:16.498 Shutdown signal received, stop current app iteration 00:06:16.498 Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 reinitialization... 00:06:16.498 spdk_app_start is called in Round 3. 
00:06:16.498 Shutdown signal received, stop current app iteration 00:06:16.498 09:20:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:16.498 09:20:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:16.498 00:06:16.498 real 0m17.974s 00:06:16.498 user 0m38.728s 00:06:16.498 sys 0m3.268s 00:06:16.498 09:20:49 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.498 09:20:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.498 ************************************ 00:06:16.498 END TEST app_repeat 00:06:16.498 ************************************ 00:06:16.498 09:20:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:16.498 09:20:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:16.498 09:20:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.498 09:20:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.498 09:20:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.498 ************************************ 00:06:16.498 START TEST cpu_locks 00:06:16.498 ************************************ 00:06:16.498 09:20:49 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:16.498 * Looking for test storage... 00:06:16.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:16.498 09:20:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:16.498 09:20:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:16.498 09:20:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:16.498 09:20:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:16.498 09:20:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.498 09:20:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.498 09:20:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.498 ************************************ 00:06:16.498 START TEST default_locks 00:06:16.498 ************************************ 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=404524 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 404524 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 404524 ']' 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.498 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.756 [2024-07-25 09:20:49.270065] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:16.756 [2024-07-25 09:20:49.270144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404524 ] 00:06:16.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.756 [2024-07-25 09:20:49.327500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.756 [2024-07-25 09:20:49.439769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.014 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.014 09:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:17.014 09:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 404524 00:06:17.014 09:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 404524 00:06:17.014 09:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.580 lslocks: write error 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 404524 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 404524 ']' 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 404524 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 404524 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 404524' 00:06:17.580 killing process with pid 404524 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 404524 00:06:17.580 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 404524 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 404524 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 404524 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 404524 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 404524 ']' 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (404524) - No such process 00:06:17.837 ERROR: process (pid: 404524) is no longer running 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.837 00:06:17.837 real 0m1.348s 00:06:17.837 user 0m1.290s 00:06:17.837 sys 0m0.542s 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.837 09:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.837 ************************************ 00:06:17.837 END TEST default_locks 00:06:17.837 ************************************ 00:06:18.095 09:20:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.095 09:20:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.095 09:20:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.095 09:20:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.095 ************************************ 00:06:18.095 START TEST default_locks_via_rpc 00:06:18.095 ************************************ 00:06:18.095 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:18.095 09:20:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=404701 00:06:18.095 09:20:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.095 09:20:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 404701 
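default_locks, just concluded above, showed both sides of the lock lifecycle: while the target runs, lslocks reports an spdk_cpu_lock entry for its pid, and after killprocess no /var/tmp/spdk_cpu_lock_* files remain. (The "lslocks: write error" lines are most likely just lslocks hitting a closed pipe because grep -q exits on the first match.) Restated as a small sketch; the nullglob line is an assumption needed to make the empty-glob case behave like the trace's lock_files=():

  shopt -s nullglob                                    # an unmatched glob expands to nothing
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # running target: lock entry present
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"
  lock_files=(/var/tmp/spdk_cpu_lock_*)
  (( ${#lock_files[@]} == 0 )) || echo "stale CPU core lock files: ${lock_files[*]}"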
00:06:18.096 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 404701 ']' 00:06:18.096 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.096 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.096 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.096 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.096 09:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.096 [2024-07-25 09:20:50.666482] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:18.096 [2024-07-25 09:20:50.666563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404701 ] 00:06:18.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.096 [2024-07-25 09:20:50.732285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.354 [2024-07-25 09:20:50.848664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 404701 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 404701 00:06:18.920 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.178 09:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 404701 
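default_locks_via_rpc repeats the same check but toggles the locks at runtime over the RPC socket instead of at start-up. In the trace, rpc_cmd forwards to SPDK's RPC client; assuming scripts/rpc.py exposes the two methods under the same names rpc_cmd uses here, the sequence is roughly:

  scripts/rpc.py framework_disable_cpumask_locks       # running target drops its core lock files
  lock_files=(/var/tmp/spdk_cpu_lock_*)                # expect: empty (with nullglob set)
  scripts/rpc.py framework_enable_cpumask_locks        # the lock on core 0 is re-acquired
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # expect: present again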
00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 404701 ']' 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 404701 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 404701 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 404701' 00:06:19.179 killing process with pid 404701 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 404701 00:06:19.179 09:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 404701 00:06:19.744 00:06:19.744 real 0m1.754s 00:06:19.744 user 0m1.900s 00:06:19.744 sys 0m0.553s 00:06:19.744 09:20:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.744 09:20:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.744 ************************************ 00:06:19.744 END TEST default_locks_via_rpc 00:06:19.744 ************************************ 00:06:19.744 09:20:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.744 09:20:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.744 09:20:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.744 09:20:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.744 ************************************ 00:06:19.744 START TEST non_locking_app_on_locked_coremask 00:06:19.744 ************************************ 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=404868 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 404868 /var/tmp/spdk.sock 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 404868 ']' 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
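Every test above blocks on waitforlisten before poking a freshly started target. The real helper in autotest_common.sh retries up to max_retries=100 (as the trace shows) and probes the RPC socket rather than just checking for its file; a deliberately simplified stand-in that captures the shape of it:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i > 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died before it ever listened
          [ -S "$rpc_addr" ] && break               # simplified: the real helper probes via RPC
          sleep 0.1
      done
      (( i == 0 )) && return 1                      # retries exhausted
      return 0
  }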
00:06:19.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.744 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.744 [2024-07-25 09:20:52.460506] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:19.744 [2024-07-25 09:20:52.460599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404868 ] 00:06:20.003 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.003 [2024-07-25 09:20:52.520023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.003 [2024-07-25 09:20:52.628038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=405001 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 405001 /var/tmp/spdk2.sock 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 405001 ']' 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.261 09:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.261 [2024-07-25 09:20:52.928654] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:20.261 [2024-07-25 09:20:52.928740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405001 ] 00:06:20.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.520 [2024-07-25 09:20:53.019488] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
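This is the point non_locking_app_on_locked_coremask is making: the second instance asks for the same core mask but passes --disable-cpumask-locks, so it logs "CPU core locks deactivated." and comes up next to the first without conflict. Condensed to the two launches, with the flags exactly as they appear in the trace:

  ./build/bin/spdk_tgt -m 0x1 &                                              # holds the core 0 lock
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # the second instance skips lock acquisition entirely, so core 0 is never contested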
00:06:20.520 [2024-07-25 09:20:53.019521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.786 [2024-07-25 09:20:53.258044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.352 09:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.352 09:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.352 09:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 404868 00:06:21.352 09:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 404868 00:06:21.352 09:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.611 lslocks: write error 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 404868 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 404868 ']' 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 404868 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 404868 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 404868' 00:06:21.611 killing process with pid 404868 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 404868 00:06:21.611 09:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 404868 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 405001 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 405001 ']' 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 405001 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 405001 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 405001' 00:06:22.545 killing 
process with pid 405001 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 405001 00:06:22.545 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 405001 00:06:23.113 00:06:23.113 real 0m3.257s 00:06:23.113 user 0m3.391s 00:06:23.113 sys 0m1.046s 00:06:23.113 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.113 09:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.113 ************************************ 00:06:23.113 END TEST non_locking_app_on_locked_coremask 00:06:23.113 ************************************ 00:06:23.113 09:20:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.113 09:20:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.113 09:20:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.113 09:20:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.113 ************************************ 00:06:23.113 START TEST locking_app_on_unlocked_coremask 00:06:23.113 ************************************ 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=405309 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 405309 /var/tmp/spdk.sock 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 405309 ']' 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.113 09:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.113 [2024-07-25 09:20:55.765599] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:23.113 [2024-07-25 09:20:55.765699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405309 ] 00:06:23.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.114 [2024-07-25 09:20:55.823733] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
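locking_app_on_unlocked_coremask is the mirror image: this time the first instance is the one started with --disable-cpumask-locks, so the second, normally started instance (launched next in the log) is free to claim the core 0 lock, and lslocks is expected to attribute spdk_cpu_lock to that second pid. A sketch, where $second_pid is just a placeholder for the pid of the second launch:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &            # takes no lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &             # claims the core 0 lock
  second_pid=$!
  lslocks -p "$second_pid" | grep spdk_cpu_lock                    # lock belongs to the second target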
00:06:23.114 [2024-07-25 09:20:55.823767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.372 [2024-07-25 09:20:55.934477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=405431 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 405431 /var/tmp/spdk2.sock 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 405431 ']' 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.631 09:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.631 [2024-07-25 09:20:56.242702] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:23.631 [2024-07-25 09:20:56.242802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405431 ] 00:06:23.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.631 [2024-07-25 09:20:56.335541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.889 [2024-07-25 09:20:56.570381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.455 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.455 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.455 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 405431 00:06:24.455 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 405431 00:06:24.455 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.021 lslocks: write error 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 405309 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 405309 ']' 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 405309 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 405309 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 405309' 00:06:25.021 killing process with pid 405309 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 405309 00:06:25.021 09:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 405309 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 405431 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 405431 ']' 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 405431 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 405431 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 405431' 00:06:25.955 killing process with pid 405431 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 405431 00:06:25.955 09:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 405431 00:06:26.522 00:06:26.522 real 0m3.297s 00:06:26.522 user 0m3.419s 00:06:26.522 sys 0m1.066s 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.522 ************************************ 00:06:26.522 END TEST locking_app_on_unlocked_coremask 00:06:26.522 ************************************ 00:06:26.522 09:20:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.522 09:20:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.522 09:20:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.522 09:20:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.522 ************************************ 00:06:26.522 START TEST locking_app_on_locked_coremask 00:06:26.522 ************************************ 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=405743 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 405743 /var/tmp/spdk.sock 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 405743 ']' 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.522 09:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.522 [2024-07-25 09:20:59.115223] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:26.522 [2024-07-25 09:20:59.115321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405743 ] 00:06:26.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.522 [2024-07-25 09:20:59.177381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.780 [2024-07-25 09:20:59.291970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.347 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.347 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=405895 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 405895 /var/tmp/spdk2.sock 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 405895 /var/tmp/spdk2.sock 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 405895 /var/tmp/spdk2.sock 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 405895 ']' 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.348 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.605 [2024-07-25 09:21:00.098830] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:27.605 [2024-07-25 09:21:00.098920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405895 ] 00:06:27.605 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.605 [2024-07-25 09:21:00.195017] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 405743 has claimed it. 00:06:27.605 [2024-07-25 09:21:00.195074] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (405895) - No such process 00:06:28.170 ERROR: process (pid: 405895) is no longer running 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 405743 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 405743 00:06:28.170 09:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.428 lslocks: write error 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 405743 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 405743 ']' 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 405743 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 405743 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 405743' 00:06:28.428 killing process with pid 405743 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 405743 00:06:28.428 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 405743 00:06:28.994 00:06:28.995 real 0m2.472s 00:06:28.995 user 0m2.788s 00:06:28.995 sys 0m0.684s 00:06:28.995 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.995 09:21:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.995 ************************************ 00:06:28.995 END TEST locking_app_on_locked_coremask 00:06:28.995 ************************************ 00:06:28.995 09:21:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:28.995 09:21:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.995 09:21:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.995 09:21:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.995 ************************************ 00:06:28.995 START TEST locking_overlapped_coremask 00:06:28.995 ************************************ 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=406209 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 406209 /var/tmp/spdk.sock 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 406209 ']' 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.995 09:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.995 [2024-07-25 09:21:01.634624] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:28.995 [2024-07-25 09:21:01.634740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406209 ] 00:06:28.995 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.995 [2024-07-25 09:21:01.696849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.253 [2024-07-25 09:21:01.820755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.253 [2024-07-25 09:21:01.820809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.253 [2024-07-25 09:21:01.820806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=406289 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 406289 /var/tmp/spdk2.sock 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 406289 /var/tmp/spdk2.sock 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 406289 /var/tmp/spdk2.sock 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 406289 ']' 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.512 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.512 [2024-07-25 09:21:02.128940] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:29.512 [2024-07-25 09:21:02.129045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406289 ] 00:06:29.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.512 [2024-07-25 09:21:02.220465] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 406209 has claimed it. 00:06:29.512 [2024-07-25 09:21:02.220528] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (406289) - No such process 00:06:30.447 ERROR: process (pid: 406289) is no longer running 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 406209 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 406209 ']' 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 406209 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406209 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406209' 00:06:30.447 killing process with pid 406209 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 406209 00:06:30.447 09:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 406209 00:06:30.706 00:06:30.706 real 0m1.707s 00:06:30.706 user 0m4.514s 00:06:30.706 sys 0m0.464s 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.706 ************************************ 00:06:30.706 END TEST locking_overlapped_coremask 00:06:30.706 ************************************ 00:06:30.706 09:21:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.706 09:21:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.706 09:21:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.706 09:21:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.706 ************************************ 00:06:30.706 START TEST locking_overlapped_coremask_via_rpc 00:06:30.706 ************************************ 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=406453 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 406453 /var/tmp/spdk.sock 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 406453 ']' 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.706 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.706 [2024-07-25 09:21:03.388520] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:30.706 [2024-07-25 09:21:03.388593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406453 ] 00:06:30.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.964 [2024-07-25 09:21:03.447468] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
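The failure locking_overlapped_coremask demonstrated above comes straight from the mask arithmetic: -m 0x7 and -m 0x1c intersect on core 2, so the second target cannot take that lock. The same pair of masks is reused by the via_rpc variant that has just started. A one-liner to see the contested bit:

  #   0x07 = 0b00111 -> cores 0,1,2   (first target, owns spdk_cpu_lock_000..002)
  #   0x1c = 0b11100 -> cores 2,3,4   (second target, needs core 2 as well)
  printf 'contested mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2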
00:06:30.964 [2024-07-25 09:21:03.447508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.964 [2024-07-25 09:21:03.562147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.964 [2024-07-25 09:21:03.562204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.964 [2024-07-25 09:21:03.562208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.222 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.222 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:31.222 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=406507 00:06:31.222 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.222 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 406507 /var/tmp/spdk2.sock 00:06:31.223 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 406507 ']' 00:06:31.223 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.223 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.223 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.223 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.223 09:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 [2024-07-25 09:21:03.879624] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:31.223 [2024-07-25 09:21:03.879752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406507 ] 00:06:31.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.480 [2024-07-25 09:21:03.969942] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.480 [2024-07-25 09:21:03.969979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.480 [2024-07-25 09:21:04.194439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.480 [2024-07-25 09:21:04.198414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.480 [2024-07-25 09:21:04.198417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.415 [2024-07-25 09:21:04.829458] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 406453 has claimed it. 
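Here both targets started with locks deactivated, so the conflict only appears when locking is switched back on over RPC: the first target (cores 0-2) claims its locks fine, and the second target's attempt fails on core 2, which is what the JSON-RPC error below reports. Using the method names rpc_cmd uses in the trace, and assuming scripts/rpc.py exposes them the same way:

  scripts/rpc.py framework_enable_cpumask_locks                             # first target: cores 0-2 locked
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks      # second target: fails, core 2 already taken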
00:06:32.415 request: 00:06:32.415 { 00:06:32.415 "method": "framework_enable_cpumask_locks", 00:06:32.415 "req_id": 1 00:06:32.415 } 00:06:32.415 Got JSON-RPC error response 00:06:32.415 response: 00:06:32.415 { 00:06:32.415 "code": -32603, 00:06:32.415 "message": "Failed to claim CPU core: 2" 00:06:32.415 } 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.415 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 406453 /var/tmp/spdk.sock 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 406453 ']' 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.416 09:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 406507 /var/tmp/spdk2.sock 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 406507 ']' 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
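The RPC sequence behind the failure above: framework_enable_cpumask_locks succeeds on the first target's default socket and takes the lock files for cores 0-2, so the same call against the second target's socket fails with code -32603 ("Failed to claim CPU core: 2"), because core 2 is shared between the two masks. The rpc_cmd helper in the trace drives SPDK's scripts/rpc.py; a standalone reproduction would look roughly like this:

    # Succeeds: the first target (cores 0-2) claims its per-core lock files.
    scripts/rpc.py framework_enable_cpumask_locks
    # Fails with -32603 "Failed to claim CPU core: 2": core 2 is already locked
    # by the first process, as the app.c error above reports.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks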
00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.416 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.674 00:06:32.674 real 0m2.016s 00:06:32.674 user 0m1.033s 00:06:32.674 sys 0m0.188s 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.674 09:21:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.674 ************************************ 00:06:32.674 END TEST locking_overlapped_coremask_via_rpc 00:06:32.674 ************************************ 00:06:32.674 09:21:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.674 09:21:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 406453 ]] 00:06:32.674 09:21:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 406453 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 406453 ']' 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 406453 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406453 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.674 09:21:05 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406453' 00:06:32.674 killing process with pid 406453 00:06:32.675 09:21:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 406453 00:06:32.675 09:21:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 406453 00:06:33.240 09:21:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 406507 ]] 00:06:33.240 09:21:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 406507 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 406507 ']' 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 406507 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
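The check_remaining_locks comparison traced above is hard to read through the escaped pattern; restated, it simply asserts that after the RPC the lock files for exactly cores 0, 1 and 2 exist:

    locks=(/var/tmp/spdk_cpu_lock_*)                      # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 of the first target
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]         # the two lists must match exactly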
00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406507 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406507' 00:06:33.240 killing process with pid 406507 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 406507 00:06:33.240 09:21:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 406507 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 406453 ]] 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 406453 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 406453 ']' 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 406453 00:06:33.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (406453) - No such process 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 406453 is not found' 00:06:33.806 Process with pid 406453 is not found 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 406507 ]] 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 406507 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 406507 ']' 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 406507 00:06:33.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (406507) - No such process 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 406507 is not found' 00:06:33.806 Process with pid 406507 is not found 00:06:33.806 09:21:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.806 00:06:33.806 real 0m17.192s 00:06:33.806 user 0m29.408s 00:06:33.806 sys 0m5.424s 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.806 09:21:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.806 ************************************ 00:06:33.806 END TEST cpu_locks 00:06:33.806 ************************************ 00:06:33.806 00:06:33.806 real 0m41.236s 00:06:33.806 user 1m17.205s 00:06:33.806 sys 0m9.516s 00:06:33.806 09:21:06 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.806 09:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.806 ************************************ 00:06:33.806 END TEST event 00:06:33.806 ************************************ 00:06:33.806 09:21:06 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.806 09:21:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.806 09:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.806 09:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:33.806 ************************************ 00:06:33.806 START TEST thread 00:06:33.806 ************************************ 00:06:33.806 09:21:06 thread -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.806 * Looking for test storage... 00:06:33.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:33.806 09:21:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.806 09:21:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:33.806 09:21:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.806 09:21:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.806 ************************************ 00:06:33.806 START TEST thread_poller_perf 00:06:33.806 ************************************ 00:06:33.806 09:21:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.806 [2024-07-25 09:21:06.490120] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:33.806 [2024-07-25 09:21:06.490185] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406949 ] 00:06:33.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.065 [2024-07-25 09:21:06.554178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.065 [2024-07-25 09:21:06.676796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.065 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:35.438 ====================================== 00:06:35.438 busy:2709032447 (cyc) 00:06:35.438 total_run_count: 296000 00:06:35.438 tsc_hz: 2700000000 (cyc) 00:06:35.438 ====================================== 00:06:35.438 poller_cost: 9152 (cyc), 3389 (nsec) 00:06:35.438 00:06:35.438 real 0m1.327s 00:06:35.438 user 0m1.248s 00:06:35.438 sys 0m0.073s 00:06:35.438 09:21:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.438 09:21:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.438 ************************************ 00:06:35.438 END TEST thread_poller_perf 00:06:35.438 ************************************ 00:06:35.438 09:21:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.438 09:21:07 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:35.438 09:21:07 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.438 09:21:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.438 ************************************ 00:06:35.438 START TEST thread_poller_perf 00:06:35.438 ************************************ 00:06:35.438 09:21:07 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.438 [2024-07-25 09:21:07.861300] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
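For the first poller_perf run above (1000 pollers, 1 second, 1 microsecond period), the summary block is just an average cost per poller execution: busy TSC cycles divided by the number of poller runs, converted to nanoseconds using the reported 2.7 GHz TSC. Worked out from the numbers above:

    poller_cost (cyc)  = busy / total_run_count = 2709032447 / 296000  ≈ 9152 cyc
    poller_cost (nsec) = poller_cost / tsc_hz * 1e9 = 9152 / 2.7e9 * 1e9 ≈ 3389 ns

The second run, whose startup banner appears in the line just above, repeats the measurement with a 0 microsecond period; its figures follow below.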
00:06:35.438 [2024-07-25 09:21:07.861400] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407264 ] 00:06:35.438 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.438 [2024-07-25 09:21:07.923563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.438 [2024-07-25 09:21:08.041006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.438 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:36.810 ====================================== 00:06:36.810 busy:2702805215 (cyc) 00:06:36.810 total_run_count: 3821000 00:06:36.810 tsc_hz: 2700000000 (cyc) 00:06:36.810 ====================================== 00:06:36.810 poller_cost: 707 (cyc), 261 (nsec) 00:06:36.810 00:06:36.810 real 0m1.315s 00:06:36.810 user 0m1.224s 00:06:36.810 sys 0m0.084s 00:06:36.810 09:21:09 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.810 09:21:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.810 ************************************ 00:06:36.810 END TEST thread_poller_perf 00:06:36.810 ************************************ 00:06:36.810 09:21:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.810 00:06:36.810 real 0m2.780s 00:06:36.810 user 0m2.525s 00:06:36.810 sys 0m0.251s 00:06:36.810 09:21:09 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.810 09:21:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.810 ************************************ 00:06:36.810 END TEST thread 00:06:36.810 ************************************ 00:06:36.810 09:21:09 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.810 09:21:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.810 09:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.810 09:21:09 -- common/autotest_common.sh@10 -- # set +x 00:06:36.810 ************************************ 00:06:36.810 START TEST accel 00:06:36.810 ************************************ 00:06:36.810 09:21:09 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.810 * Looking for test storage... 
00:06:36.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:36.810 09:21:09 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:36.810 09:21:09 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:36.811 09:21:09 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.811 09:21:09 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=407764 00:06:36.811 09:21:09 accel -- accel/accel.sh@63 -- # waitforlisten 407764 00:06:36.811 09:21:09 accel -- common/autotest_common.sh@829 -- # '[' -z 407764 ']' 00:06:36.811 09:21:09 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.811 09:21:09 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:36.811 09:21:09 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:36.811 09:21:09 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.811 09:21:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.811 09:21:09 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.811 09:21:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.811 09:21:09 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.811 09:21:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.811 09:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.811 09:21:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.811 09:21:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.811 09:21:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:36.811 09:21:09 accel -- accel/accel.sh@41 -- # jq -r . 00:06:36.811 [2024-07-25 09:21:09.328896] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:36.811 [2024-07-25 09:21:09.328988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407764 ] 00:06:36.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.811 [2024-07-25 09:21:09.390007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.811 [2024-07-25 09:21:09.499986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@862 -- # return 0 00:06:37.104 09:21:09 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:37.104 09:21:09 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:37.104 09:21:09 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:37.104 09:21:09 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:37.104 09:21:09 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:37.104 09:21:09 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.104 09:21:09 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.104 09:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.104 09:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.104 09:21:09 accel -- accel/accel.sh@75 -- # killprocess 407764 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@948 -- # '[' -z 407764 ']' 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@952 -- # kill -0 407764 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@953 -- # uname 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.104 09:21:09 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 407764 00:06:37.411 09:21:09 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.411 09:21:09 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.411 09:21:09 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 407764' 00:06:37.411 killing process with pid 407764 00:06:37.411 09:21:09 accel -- common/autotest_common.sh@967 -- # kill 407764 00:06:37.411 09:21:09 accel -- common/autotest_common.sh@972 -- # wait 407764 00:06:37.668 09:21:10 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:37.668 09:21:10 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:37.668 09:21:10 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:37.668 09:21:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.668 09:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.668 09:21:10 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:37.668 09:21:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:37.668 09:21:10 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.668 09:21:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:37.668 09:21:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:37.668 09:21:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:37.668 09:21:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.668 09:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.668 ************************************ 00:06:37.668 START TEST accel_missing_filename 00:06:37.668 ************************************ 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.668 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:37.668 09:21:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:37.668 [2024-07-25 09:21:10.390371] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:37.668 [2024-07-25 09:21:10.390458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407983 ] 00:06:37.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.924 [2024-07-25 09:21:10.453254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.924 [2024-07-25 09:21:10.572031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.924 [2024-07-25 09:21:10.630724] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.181 [2024-07-25 09:21:10.715730] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:38.181 A filename is required. 
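The failure above is the point of the accel_missing_filename test: accel_perf's compress workload needs an uncompressed input file passed with -l, the test invokes it without one, and the run aborts with "A filename is required." The NOT wrapper in the trace then inverts the non-zero exit status so the negative test passes. Per the option help printed later in this log, a working compress invocation would supply the input file, for example the bib test file that the compress_verify test below uses (a sketch, not quoted verbatim from the trace):

    # Fails, as exercised above: no -l input file for the compress workload.
    ./build/examples/accel_perf -t 1 -w compress
    # Expected to run: same workload with an uncompressed input file supplied.
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib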
00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.181 00:06:38.181 real 0m0.469s 00:06:38.181 user 0m0.347s 00:06:38.181 sys 0m0.156s 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.181 09:21:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:38.181 ************************************ 00:06:38.181 END TEST accel_missing_filename 00:06:38.181 ************************************ 00:06:38.181 09:21:10 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.181 09:21:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:38.181 09:21:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.181 09:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.181 ************************************ 00:06:38.181 START TEST accel_compress_verify 00:06:38.181 ************************************ 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.181 09:21:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.181 09:21:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.181 09:21:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:38.182 09:21:10 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.182 09:21:10 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.182 09:21:10 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.182 09:21:10 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.182 09:21:10 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.182 
09:21:10 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:38.182 09:21:10 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:38.182 [2024-07-25 09:21:10.900768] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:38.182 [2024-07-25 09:21:10.900838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408128 ] 00:06:38.438 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.438 [2024-07-25 09:21:10.963056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.438 [2024-07-25 09:21:11.075054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.438 [2024-07-25 09:21:11.134932] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.695 [2024-07-25 09:21:11.219239] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:38.695 00:06:38.695 Compression does not support the verify option, aborting. 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.695 00:06:38.695 real 0m0.462s 00:06:38.695 user 0m0.355s 00:06:38.695 sys 0m0.142s 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.695 09:21:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:38.695 ************************************ 00:06:38.695 END TEST accel_compress_verify 00:06:38.695 ************************************ 00:06:38.695 09:21:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.695 09:21:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.695 09:21:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.695 09:21:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.695 ************************************ 00:06:38.695 START TEST accel_wrong_workload 00:06:38.695 ************************************ 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 
1 -w foobar 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:38.695 09:21:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:38.695 Unsupported workload type: foobar 00:06:38.695 [2024-07-25 09:21:11.406270] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:38.695 accel_perf options: 00:06:38.695 [-h help message] 00:06:38.695 [-q queue depth per core] 00:06:38.695 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.695 [-T number of threads per core 00:06:38.695 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.695 [-t time in seconds] 00:06:38.695 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.695 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:38.695 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.695 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.695 [-S for crc32c workload, use this seed value (default 0) 00:06:38.695 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.695 [-f for fill workload, use this BYTE value (default 255) 00:06:38.695 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.695 [-y verify result if this switch is on] 00:06:38.695 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.695 Can be used to spread operations across a wider range of memory. 
00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.695 00:06:38.695 real 0m0.023s 00:06:38.695 user 0m0.013s 00:06:38.695 sys 0m0.010s 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.695 09:21:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:38.695 ************************************ 00:06:38.695 END TEST accel_wrong_workload 00:06:38.695 ************************************ 00:06:38.695 Error: writing output failed: Broken pipe 00:06:38.953 09:21:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.953 09:21:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:38.953 09:21:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.953 09:21:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.953 ************************************ 00:06:38.953 START TEST accel_negative_buffers 00:06:38.953 ************************************ 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:38.953 09:21:11 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:38.953 -x option must be non-negative. 
00:06:38.953 [2024-07-25 09:21:11.476831] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:38.953 accel_perf options: 00:06:38.953 [-h help message] 00:06:38.953 [-q queue depth per core] 00:06:38.953 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.953 [-T number of threads per core 00:06:38.953 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.953 [-t time in seconds] 00:06:38.953 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.953 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:38.953 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.953 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.953 [-S for crc32c workload, use this seed value (default 0) 00:06:38.953 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.953 [-f for fill workload, use this BYTE value (default 255) 00:06:38.953 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.953 [-y verify result if this switch is on] 00:06:38.953 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.953 Can be used to spread operations across a wider range of memory. 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.953 00:06:38.953 real 0m0.025s 00:06:38.953 user 0m0.010s 00:06:38.953 sys 0m0.014s 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.953 09:21:11 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:38.953 ************************************ 00:06:38.953 END TEST accel_negative_buffers 00:06:38.953 ************************************ 00:06:38.953 Error: writing output failed: Broken pipe 00:06:38.953 09:21:11 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:38.953 09:21:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:38.953 09:21:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.953 09:21:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.953 ************************************ 00:06:38.953 START TEST accel_crc32c 00:06:38.953 ************************************ 00:06:38.953 09:21:11 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:38.953 09:21:11 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:38.953 [2024-07-25 09:21:11.540662] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:38.953 [2024-07-25 09:21:11.540726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408194 ] 00:06:38.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.953 [2024-07-25 09:21:11.604232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.211 [2024-07-25 09:21:11.722860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 
09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.212 09:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.584 09:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.584 09:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.584 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.584 09:21:12 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.584 09:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.584 09:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.584 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:40.585 09:21:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.585 00:06:40.585 real 0m1.467s 00:06:40.585 user 0m1.331s 00:06:40.585 sys 0m0.139s 00:06:40.585 09:21:12 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.585 09:21:12 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:40.585 ************************************ 00:06:40.585 END TEST accel_crc32c 00:06:40.585 ************************************ 00:06:40.585 09:21:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:40.585 09:21:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:40.585 09:21:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.585 09:21:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.585 ************************************ 00:06:40.585 START TEST accel_crc32c_C2 00:06:40.585 ************************************ 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.585 09:21:13 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:40.585 [2024-07-25 09:21:13.056149] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:40.585 [2024-07-25 09:21:13.056213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408471 ] 00:06:40.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.585 [2024-07-25 09:21:13.118007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.585 [2024-07-25 09:21:13.236425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.585 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.586 09:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.955 00:06:41.955 real 0m1.467s 00:06:41.955 user 0m1.326s 00:06:41.955 sys 0m0.144s 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.955 09:21:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:41.955 ************************************ 00:06:41.955 END TEST accel_crc32c_C2 00:06:41.955 ************************************ 00:06:41.955 09:21:14 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:41.955 09:21:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.955 09:21:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.955 09:21:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.955 ************************************ 00:06:41.955 START TEST accel_copy 00:06:41.955 ************************************ 00:06:41.955 09:21:14 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:41.955 09:21:14 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:41.955 [2024-07-25 09:21:14.573264] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:41.956 [2024-07-25 09:21:14.573324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408627 ] 00:06:41.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.956 [2024-07-25 09:21:14.634968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.213 [2024-07-25 09:21:14.754546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.213 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.214 09:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.585 09:21:16 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:43.585 09:21:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.585 00:06:43.585 real 0m1.474s 00:06:43.585 user 0m1.321s 00:06:43.585 sys 0m0.155s 00:06:43.585 09:21:16 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.585 09:21:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.585 ************************************ 00:06:43.585 END TEST accel_copy 00:06:43.585 ************************************ 00:06:43.585 09:21:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.585 09:21:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:43.585 09:21:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.585 09:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.585 ************************************ 00:06:43.585 START TEST accel_fill 00:06:43.585 ************************************ 00:06:43.586 09:21:16 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@34 -- 
# [[ 0 -gt 0 ]] 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:43.586 09:21:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:43.586 [2024-07-25 09:21:16.094942] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:43.586 [2024-07-25 09:21:16.095003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408786 ] 00:06:43.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.586 [2024-07-25 09:21:16.156655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.586 [2024-07-25 09:21:16.280031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.843 09:21:16 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.843 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.844 09:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.215 09:21:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:45.216 09:21:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.216 00:06:45.216 real 0m1.477s 00:06:45.216 user 0m1.330s 00:06:45.216 sys 0m0.149s 00:06:45.216 09:21:17 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.216 09:21:17 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:45.216 ************************************ 00:06:45.216 END TEST accel_fill 00:06:45.216 ************************************ 00:06:45.216 09:21:17 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:45.216 09:21:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.216 09:21:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.216 09:21:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.216 ************************************ 00:06:45.216 START TEST accel_copy_crc32c 00:06:45.216 ************************************ 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:45.216 09:21:17 
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:45.216 [2024-07-25 09:21:17.616364] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:45.216 [2024-07-25 09:21:17.616443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409057 ] 00:06:45.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.216 [2024-07-25 09:21:17.679344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.216 [2024-07-25 09:21:17.797690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.216 09:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.588 00:06:46.588 real 0m1.462s 00:06:46.588 user 0m1.320s 00:06:46.588 sys 0m0.145s 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.588 09:21:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:46.588 ************************************ 00:06:46.588 END TEST accel_copy_crc32c 00:06:46.588 ************************************ 00:06:46.588 09:21:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.588 09:21:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:46.588 09:21:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.588 09:21:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.588 ************************************ 00:06:46.588 START TEST accel_copy_crc32c_C2 00:06:46.588 ************************************ 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c 
-y -C 2 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:46.588 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.589 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:46.589 [2024-07-25 09:21:19.128541] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:46.589 [2024-07-25 09:21:19.128604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409221 ] 00:06:46.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.589 [2024-07-25 09:21:19.193456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.589 [2024-07-25 09:21:19.311350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.846 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.847 09:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.218 00:06:48.218 real 0m1.481s 00:06:48.218 user 0m1.335s 00:06:48.218 sys 0m0.148s 00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:48.218 09:21:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:48.218 ************************************ 00:06:48.218 END TEST accel_copy_crc32c_C2 00:06:48.218 ************************************ 00:06:48.218 09:21:20 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:48.218 09:21:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.218 09:21:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.218 09:21:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.218 ************************************ 00:06:48.218 START TEST accel_dualcast 00:06:48.218 ************************************ 00:06:48.218 09:21:20 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:48.218 09:21:20 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:48.218 [2024-07-25 09:21:20.654492] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:48.218 [2024-07-25 09:21:20.654555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409372 ] 00:06:48.219 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.219 [2024-07-25 09:21:20.719383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.219 [2024-07-25 09:21:20.836593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.219 09:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:49.591 09:21:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.591 00:06:49.591 real 0m1.470s 00:06:49.591 user 0m1.327s 00:06:49.591 sys 0m0.145s 00:06:49.591 09:21:22 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.591 09:21:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:49.591 ************************************ 00:06:49.591 END TEST accel_dualcast 00:06:49.591 ************************************ 00:06:49.591 09:21:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:49.591 09:21:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.591 09:21:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.591 09:21:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.591 ************************************ 00:06:49.591 START TEST accel_compare 00:06:49.591 ************************************ 00:06:49.591 09:21:22 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:49.591 09:21:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:49.591 [2024-07-25 09:21:22.175556] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
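The accel_compare pass launched just above is driven the same way as the dualcast pass that precedes it: run_test invokes accel_test, which execs the accel_perf example with a JSON accel config piped in on /dev/fd/62, and the repeated 'case "$var"' / 'IFS=:' / 'read -r var val' lines are the accel.sh wrapper splitting accel_perf's output on ':' to capture the opcode and module, so it can assert '[[ -n software ]]', '[[ -n compare ]]' and '[[ software == software ]]' once the run finishes. A minimal standalone sketch of the same software-path compare run, using only the flags visible in this trace (the piped -c config is omitted here, so module defaults apply):

    # sketch only: 1-second compare workload with result verification (-y)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y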
00:06:49.591 [2024-07-25 09:21:22.175616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409651 ] 00:06:49.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.591 [2024-07-25 09:21:22.236793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.849 [2024-07-25 09:21:22.355261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.849 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.850 09:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 
09:21:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:51.222 09:21:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.222 00:06:51.222 real 0m1.476s 00:06:51.222 user 0m1.332s 00:06:51.222 sys 0m0.146s 00:06:51.222 09:21:23 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.222 09:21:23 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:51.222 ************************************ 00:06:51.222 END TEST accel_compare 00:06:51.222 ************************************ 00:06:51.222 09:21:23 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:51.222 09:21:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.222 09:21:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.222 09:21:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.222 ************************************ 00:06:51.222 START TEST accel_xor 00:06:51.222 ************************************ 00:06:51.222 09:21:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.222 [2024-07-25 09:21:23.698632] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
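The accel_xor pass started above reuses the same wrapper with '-w xor -y'; the trace that follows shows it running with two xor source buffers (val=2) and the usual 4096-byte buffers, queue depth 32 and 1-second duration. A standalone sketch under the same assumptions as the compare example:

    # sketch only: xor workload, default number of source buffers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y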
00:06:51.222 [2024-07-25 09:21:23.698696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409807 ] 00:06:51.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.222 [2024-07-25 09:21:23.764235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.222 [2024-07-25 09:21:23.887296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.222 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.479 09:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.851 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.852 00:06:52.852 real 0m1.491s 00:06:52.852 user 0m1.346s 00:06:52.852 sys 0m0.147s 00:06:52.852 09:21:25 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.852 09:21:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:52.852 ************************************ 00:06:52.852 END TEST accel_xor 00:06:52.852 ************************************ 00:06:52.852 09:21:25 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:52.852 09:21:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:52.852 09:21:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.852 09:21:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.852 ************************************ 00:06:52.852 START TEST accel_xor 00:06:52.852 ************************************ 00:06:52.852 09:21:25 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:52.852 [2024-07-25 09:21:25.237166] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
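The second accel_xor pass launched above repeats the workload with '-x 3', and the trace below shows val=3, i.e. three xor source buffers instead of the two used in the previous run. Standalone sketch, same assumptions as before:

    # sketch only: xor across three source buffers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3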
00:06:52.852 [2024-07-25 09:21:25.237231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409979 ] 00:06:52.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.852 [2024-07-25 09:21:25.302669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.852 [2024-07-25 09:21:25.422767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 09:21:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:54.225 09:21:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.225 00:06:54.225 real 0m1.478s 00:06:54.225 user 0m1.338s 00:06:54.225 sys 0m0.142s 00:06:54.225 09:21:26 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.225 09:21:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:54.225 ************************************ 00:06:54.225 END TEST accel_xor 00:06:54.225 ************************************ 00:06:54.225 09:21:26 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.225 09:21:26 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:54.225 09:21:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.225 09:21:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.225 ************************************ 00:06:54.225 START TEST accel_dif_verify 00:06:54.225 ************************************ 00:06:54.225 09:21:26 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:54.225 09:21:26 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:54.225 [2024-07-25 09:21:26.761082] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
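The accel_dif_verify pass launched above drops '-y' (the wrapper records 'No' for the verify key in this trace) and, alongside the 4096-byte buffer sizes, picks up additional '512 bytes' and '8 bytes' values, presumably DIF-related sizes from the test definition. Standalone sketch using only the flags visible in the trace:

    # sketch only: DIF verify workload
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify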
00:06:54.225 [2024-07-25 09:21:26.761146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410237 ] 00:06:54.225 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.225 [2024-07-25 09:21:26.826795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.225 [2024-07-25 09:21:26.949719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.483 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.484 09:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:55.855 09:21:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.855 00:06:55.855 real 0m1.477s 00:06:55.855 user 0m1.337s 00:06:55.855 sys 0m0.144s 00:06:55.855 09:21:28 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.855 09:21:28 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:55.855 ************************************ 00:06:55.855 END TEST accel_dif_verify 00:06:55.855 ************************************ 00:06:55.855 09:21:28 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:55.855 09:21:28 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:55.855 09:21:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.855 09:21:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.855 ************************************ 00:06:55.855 START TEST accel_dif_generate 00:06:55.855 ************************************ 00:06:55.855 09:21:28 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 
-w dif_generate 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:55.855 [2024-07-25 09:21:28.283094] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:55.855 [2024-07-25 09:21:28.283156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410402 ] 00:06:55.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.855 [2024-07-25 09:21:28.348327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.855 [2024-07-25 09:21:28.471078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.855 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.856 09:21:28 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.856 09:21:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:57.228 09:21:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.228 00:06:57.228 real 0m1.490s 
00:06:57.228 user 0m1.347s 00:06:57.228 sys 0m0.148s 00:06:57.228 09:21:29 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.228 09:21:29 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:57.228 ************************************ 00:06:57.229 END TEST accel_dif_generate 00:06:57.229 ************************************ 00:06:57.229 09:21:29 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.229 09:21:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:57.229 09:21:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.229 09:21:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.229 ************************************ 00:06:57.229 START TEST accel_dif_generate_copy 00:06:57.229 ************************************ 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:57.229 09:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:57.229 [2024-07-25 09:21:29.819532] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
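The dif_generate pass summarized above and the accel_dif_generate_copy pass launched right after it follow the same pattern, differing only in the workload name passed to accel_perf. Standalone sketches, same assumptions as the earlier examples:

    # sketch only: DIF generate, then generate-and-copy
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy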
00:06:57.229 [2024-07-25 09:21:29.819598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410580 ] 00:06:57.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.229 [2024-07-25 09:21:29.886762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.487 [2024-07-25 09:21:30.008453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.487 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.488 09:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.859 00:06:58.859 real 0m1.495s 00:06:58.859 user 0m1.354s 00:06:58.859 sys 0m0.143s 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.859 09:21:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.859 ************************************ 00:06:58.859 END TEST accel_dif_generate_copy 00:06:58.859 ************************************ 00:06:58.859 09:21:31 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:58.859 09:21:31 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.859 09:21:31 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:58.859 09:21:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.859 09:21:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.859 ************************************ 00:06:58.859 START TEST accel_comp 00:06:58.859 ************************************ 00:06:58.859 09:21:31 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:58.859 09:21:31 accel.accel_comp 
-- accel/accel.sh@17 -- # local accel_module 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:58.859 09:21:31 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:58.859 [2024-07-25 09:21:31.361474] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:58.859 [2024-07-25 09:21:31.361538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410832 ] 00:06:58.859 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.859 [2024-07-25 09:21:31.423390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.859 [2024-07-25 09:21:31.546079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- 
# val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.117 09:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:00.490 09:21:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.490 00:07:00.490 real 0m1.490s 00:07:00.490 user 0m1.355s 00:07:00.490 sys 0m0.138s 00:07:00.490 09:21:32 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.490 09:21:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:00.490 ************************************ 00:07:00.490 END TEST accel_comp 00:07:00.490 ************************************ 00:07:00.490 09:21:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.490 09:21:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.490 09:21:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.490 09:21:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.490 ************************************ 00:07:00.490 START TEST accel_decomp 00:07:00.490 
************************************ 00:07:00.490 09:21:32 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.490 09:21:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:00.490 09:21:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:00.490 09:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.490 09:21:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.490 09:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:00.491 09:21:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:00.491 [2024-07-25 09:21:32.898458] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:00.491 [2024-07-25 09:21:32.898524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410987 ] 00:07:00.491 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.491 [2024-07-25 09:21:32.961632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.491 [2024-07-25 09:21:33.083457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 
09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.491 09:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.862 09:21:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.862 00:07:01.862 real 0m1.491s 00:07:01.862 user 0m1.347s 00:07:01.862 sys 0m0.147s 00:07:01.862 09:21:34 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.862 09:21:34 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:01.862 ************************************ 00:07:01.862 END TEST 
accel_decomp 00:07:01.862 ************************************ 00:07:01.862 09:21:34 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.862 09:21:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:01.862 09:21:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.862 09:21:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.862 ************************************ 00:07:01.862 START TEST accel_decomp_full 00:07:01.862 ************************************ 00:07:01.862 09:21:34 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:01.862 09:21:34 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:01.863 [2024-07-25 09:21:34.438005] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:01.863 [2024-07-25 09:21:34.438069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411224 ] 00:07:01.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.863 [2024-07-25 09:21:34.504858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.120 [2024-07-25 09:21:34.628152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.120 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.121 09:21:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.492 09:21:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.492 00:07:03.492 real 0m1.501s 00:07:03.492 user 0m1.355s 00:07:03.492 sys 0m0.149s 00:07:03.492 09:21:35 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.492 09:21:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:03.492 ************************************ 00:07:03.492 END TEST accel_decomp_full 00:07:03.492 ************************************ 00:07:03.492 09:21:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.492 09:21:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:03.492 09:21:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.492 09:21:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.492 ************************************ 00:07:03.492 START TEST accel_decomp_mcore 00:07:03.492 ************************************ 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:03.492 09:21:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:03.492 [2024-07-25 09:21:35.986114] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:03.492 [2024-07-25 09:21:35.986179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411418 ] 00:07:03.492 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.492 [2024-07-25 09:21:36.049288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.492 [2024-07-25 09:21:36.176934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.492 [2024-07-25 09:21:36.176985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.492 [2024-07-25 09:21:36.177039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.492 [2024-07-25 09:21:36.177043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 
-- # val= 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.751 09:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.124 00:07:05.124 real 0m1.504s 00:07:05.124 user 0m4.831s 00:07:05.124 sys 0m0.153s 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.124 09:21:37 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:05.124 ************************************ 00:07:05.124 END TEST accel_decomp_mcore 00:07:05.124 ************************************ 00:07:05.124 09:21:37 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.124 09:21:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:05.124 09:21:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.124 09:21:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.124 ************************************ 00:07:05.124 START TEST accel_decomp_full_mcore 00:07:05.124 ************************************ 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.124 09:21:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:05.124 [2024-07-25 09:21:37.532845] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:05.124 [2024-07-25 09:21:37.532910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411584 ] 00:07:05.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.124 [2024-07-25 09:21:37.595858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.124 [2024-07-25 09:21:37.721519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.124 [2024-07-25 09:21:37.721572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.124 [2024-07-25 09:21:37.721623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.124 [2024-07-25 09:21:37.721627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.124 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.125 09:21:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.125 09:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.498 00:07:06.498 real 0m1.507s 00:07:06.498 user 0m4.848s 00:07:06.498 sys 0m0.161s 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.498 09:21:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:06.498 ************************************ 00:07:06.498 END TEST accel_decomp_full_mcore 00:07:06.498 ************************************ 00:07:06.498 09:21:39 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.498 09:21:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:06.498 09:21:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.498 09:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.498 ************************************ 00:07:06.498 START TEST accel_decomp_mthread 00:07:06.498 ************************************ 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
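The decompress cases above (accel_decomp_mcore, accel_decomp_full_mcore, and the accel_decomp_mthread case that is starting here) all funnel into build/examples/accel_perf; run_test/accel_test only wrap the invocation, and the -c /dev/fd/62 argument carries the accel JSON config assembled by build_accel_config (empty in this run, accel_json_cfg=()). Below is a minimal standalone sketch of the two invocation shapes visible in this trace, assuming the config descriptor can simply be dropped when no JSON config is needed; SPDK is just shorthand for the workspace checkout path, and flag meanings beyond what the trace itself shows (-t 1 maps to '1 seconds', -w decompress is the workload, -m 0xf starts four reactors, -T 2 appears as val=2 in the mthread case) are not asserted here.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # multi-core shape, as in accel_decomp_full_mcore above (cores 0-3 via -m 0xf)
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf
    # multi-thread shape, as in the accel_decomp_mthread case starting here (-T 2, single core)
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2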
00:07:06.498 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.498 [2024-07-25 09:21:39.085969] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:06.498 [2024-07-25 09:21:39.086035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411854 ] 00:07:06.498 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.498 [2024-07-25 09:21:39.151211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.757 [2024-07-25 09:21:39.272849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 
09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.757 09:21:39 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.757 09:21:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.130 00:07:08.130 real 0m1.487s 00:07:08.130 user 0m1.347s 00:07:08.130 sys 0m0.143s 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.130 09:21:40 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:08.130 ************************************ 00:07:08.130 END TEST accel_decomp_mthread 00:07:08.130 ************************************ 00:07:08.130 09:21:40 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.130 09:21:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 
']' 00:07:08.130 09:21:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.130 09:21:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.130 ************************************ 00:07:08.130 START TEST accel_decomp_full_mthread 00:07:08.130 ************************************ 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:08.130 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:08.130 [2024-07-25 09:21:40.627728] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:08.130 [2024-07-25 09:21:40.627794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412018 ] 00:07:08.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.130 [2024-07-25 09:21:40.691040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.130 [2024-07-25 09:21:40.812034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.388 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.389 09:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.760 00:07:09.760 real 0m1.529s 00:07:09.760 user 0m1.384s 00:07:09.760 sys 0m0.148s 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.760 09:21:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:09.760 ************************************ 00:07:09.760 END TEST accel_decomp_full_mthread 
00:07:09.760 ************************************ 00:07:09.760 09:21:42 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:09.760 09:21:42 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:09.760 09:21:42 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:09.760 09:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:09.760 09:21:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.761 09:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.761 09:21:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.761 09:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.761 09:21:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.761 09:21:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.761 09:21:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.761 09:21:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:09.761 09:21:42 accel -- accel/accel.sh@41 -- # jq -r . 00:07:09.761 ************************************ 00:07:09.761 START TEST accel_dif_functional_tests 00:07:09.761 ************************************ 00:07:09.761 09:21:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:09.761 [2024-07-25 09:21:42.224606] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:09.761 [2024-07-25 09:21:42.224684] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412174 ] 00:07:09.761 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.761 [2024-07-25 09:21:42.285859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.761 [2024-07-25 09:21:42.411391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.761 [2024-07-25 09:21:42.411421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.761 [2024-07-25 09:21:42.411425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.021 00:07:10.021 00:07:10.021 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.021 http://cunit.sourceforge.net/ 00:07:10.021 00:07:10.021 00:07:10.021 Suite: accel_dif 00:07:10.021 Test: verify: DIF generated, GUARD check ...passed 00:07:10.021 Test: verify: DIF generated, APPTAG check ...passed 00:07:10.021 Test: verify: DIF generated, REFTAG check ...passed 00:07:10.021 Test: verify: DIF not generated, GUARD check ...[2024-07-25 09:21:42.512429] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:10.021 passed 00:07:10.021 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 09:21:42.512502] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:10.021 passed 00:07:10.021 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 09:21:42.512542] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:10.021 passed 00:07:10.021 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:10.021 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 09:21:42.512615] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 
00:07:10.021 passed 00:07:10.021 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:10.021 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:10.021 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:10.021 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 09:21:42.512793] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:10.021 passed 00:07:10.021 Test: verify copy: DIF generated, GUARD check ...passed 00:07:10.021 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:10.021 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:10.021 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 09:21:42.512975] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:10.021 passed 00:07:10.021 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 09:21:42.513019] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:10.021 passed 00:07:10.021 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 09:21:42.513059] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:10.021 passed 00:07:10.021 Test: generate copy: DIF generated, GUARD check ...passed 00:07:10.021 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:10.021 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:10.021 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:10.021 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:10.021 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:10.021 Test: generate copy: iovecs-len validate ...[2024-07-25 09:21:42.513328] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:10.021 passed 00:07:10.021 Test: generate copy: buffer alignment validate ...passed 00:07:10.021 00:07:10.021 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.021 suites 1 1 n/a 0 0 00:07:10.021 tests 26 26 26 0 0 00:07:10.021 asserts 115 115 115 0 n/a 00:07:10.021 00:07:10.021 Elapsed time = 0.005 seconds 00:07:10.280 00:07:10.280 real 0m0.601s 00:07:10.280 user 0m0.913s 00:07:10.280 sys 0m0.187s 00:07:10.280 09:21:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.280 09:21:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:10.280 ************************************ 00:07:10.280 END TEST accel_dif_functional_tests 00:07:10.280 ************************************ 00:07:10.280 00:07:10.280 real 0m33.577s 00:07:10.280 user 0m37.005s 00:07:10.280 sys 0m4.662s 00:07:10.280 09:21:42 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.280 09:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.280 ************************************ 00:07:10.280 END TEST accel 00:07:10.280 ************************************ 00:07:10.280 09:21:42 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:10.280 09:21:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.280 09:21:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.280 09:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:10.280 ************************************ 00:07:10.280 START TEST accel_rpc 00:07:10.280 ************************************ 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:10.280 * Looking for test storage... 00:07:10.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:10.280 09:21:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:10.280 09:21:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=412365 00:07:10.280 09:21:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:10.280 09:21:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 412365 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 412365 ']' 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.280 09:21:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.280 [2024-07-25 09:21:42.958486] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
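The accel_dif_functional_tests block that finishes just above is a CUnit suite: 26 tests and 115 asserts, none failed, covering DIF verify, verify-copy and generate-copy paths for the Guard, App Tag and Ref Tag fields. The *ERROR* lines inside it are expected output: the negative cases ("DIF not generated", "iovecs-len validate", and similar) deliberately feed mismatching data and pass precisely because the mismatch is reported. A minimal sketch of the invocation the harness used follows; SPDK is shorthand for the workspace path, and the assumption is that outside run_test the /dev/fd/62 descriptor (the accel JSON config written by build_accel_config) would be replaced with a real config file.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # as launched above on a 0x7 core mask (reactors on cores 0, 1 and 2)
    $SPDK/test/accel/dif/dif -c /dev/fd/62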
00:07:10.280 [2024-07-25 09:21:42.958582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412365 ] 00:07:10.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.538 [2024-07-25 09:21:43.015718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.538 [2024-07-25 09:21:43.122068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.538 09:21:43 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.538 09:21:43 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:10.538 09:21:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:10.538 09:21:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:10.538 09:21:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:10.538 09:21:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:10.538 09:21:43 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:10.538 09:21:43 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.538 09:21:43 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.538 09:21:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 ************************************ 00:07:10.538 START TEST accel_assign_opcode 00:07:10.538 ************************************ 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 [2024-07-25 09:21:43.182662] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 [2024-07-25 09:21:43.190675] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.538 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
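The accel_assign_opcode flow above only works because spdk_tgt was launched with --wait-for-rpc: the opcode-to-module assignment is accepted before the accel framework initializes, and framework_start_init then completes startup. A sketch of the same sequence issued directly with scripts/rpc.py instead of the harness's rpc_cmd wrapper, assuming the default /var/tmp/spdk.sock socket named in the waitforlisten message above; rpc is just shorthand for the script path.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software      # pin the copy opcode to the software module
    $rpc framework_start_init                      # finish startup once assignments are in place
    $rpc accel_get_opc_assignments | jq -r .copy   # prints "software" in this run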
00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.796 software 00:07:10.796 00:07:10.796 real 0m0.299s 00:07:10.796 user 0m0.037s 00:07:10.796 sys 0m0.006s 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.796 09:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.796 ************************************ 00:07:10.796 END TEST accel_assign_opcode 00:07:10.796 ************************************ 00:07:10.796 09:21:43 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 412365 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 412365 ']' 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 412365 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 412365 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 412365' 00:07:10.796 killing process with pid 412365 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@967 -- # kill 412365 00:07:10.796 09:21:43 accel_rpc -- common/autotest_common.sh@972 -- # wait 412365 00:07:11.360 00:07:11.360 real 0m1.142s 00:07:11.360 user 0m1.052s 00:07:11.360 sys 0m0.434s 00:07:11.360 09:21:43 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.360 09:21:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.360 ************************************ 00:07:11.360 END TEST accel_rpc 00:07:11.360 ************************************ 00:07:11.360 09:21:44 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.360 09:21:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.360 09:21:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.360 09:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.360 ************************************ 00:07:11.360 START TEST app_cmdline 00:07:11.360 ************************************ 00:07:11.360 09:21:44 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.360 * Looking for test storage... 
00:07:11.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.619 09:21:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.619 09:21:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=412569 00:07:11.619 09:21:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.619 09:21:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 412569 00:07:11.619 09:21:44 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 412569 ']' 00:07:11.619 09:21:44 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.619 09:21:44 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.619 09:21:44 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.619 09:21:44 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.619 09:21:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.619 [2024-07-25 09:21:44.150880] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:11.619 [2024-07-25 09:21:44.150965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412569 ] 00:07:11.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.619 [2024-07-25 09:21:44.212435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.619 [2024-07-25 09:21:44.332995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.551 09:21:45 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.551 09:21:45 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:12.551 09:21:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:12.809 { 00:07:12.809 "version": "SPDK v24.09-pre git sha1 c0d54772e", 00:07:12.809 "fields": { 00:07:12.809 "major": 24, 00:07:12.809 "minor": 9, 00:07:12.809 "patch": 0, 00:07:12.809 "suffix": "-pre", 00:07:12.809 "commit": "c0d54772e" 00:07:12.809 } 00:07:12.809 } 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.809 09:21:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.809 09:21:45 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.067 request: 00:07:13.067 { 00:07:13.067 "method": "env_dpdk_get_mem_stats", 00:07:13.067 "req_id": 1 00:07:13.067 } 00:07:13.067 Got JSON-RPC error response 00:07:13.067 response: 00:07:13.067 { 00:07:13.067 "code": -32601, 00:07:13.067 "message": "Method not found" 00:07:13.067 } 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.067 09:21:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 412569 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 412569 ']' 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 412569 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 412569 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 412569' 00:07:13.067 killing process with pid 412569 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@967 -- # kill 412569 00:07:13.067 09:21:45 app_cmdline -- common/autotest_common.sh@972 -- # wait 412569 00:07:13.633 00:07:13.633 real 0m2.145s 00:07:13.633 user 0m2.722s 00:07:13.633 sys 0m0.499s 00:07:13.633 09:21:46 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.633 
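The cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and anything else comes back as the JSON-RPC error shown (code -32601, "Method not found"). A sketch of the same three probes against that target, using the workspace rpc.py path from the log; rpc is shorthand introduced here.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc spdk_get_version | jq -r .version      # SPDK v24.09-pre git sha1 c0d54772e in this run
    $rpc rpc_get_methods | jq -r '.[]' | sort   # only rpc_get_methods and spdk_get_version here
    $rpc env_dpdk_get_mem_stats                 # rejected: code -32601, "Method not found"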
09:21:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.633 ************************************ 00:07:13.633 END TEST app_cmdline 00:07:13.633 ************************************ 00:07:13.633 09:21:46 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:13.633 09:21:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.634 09:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.634 09:21:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.634 ************************************ 00:07:13.634 START TEST version 00:07:13.634 ************************************ 00:07:13.634 09:21:46 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:13.634 * Looking for test storage... 00:07:13.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:13.634 09:21:46 version -- app/version.sh@17 -- # get_header_version major 00:07:13.634 09:21:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # cut -f2 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.634 09:21:46 version -- app/version.sh@17 -- # major=24 00:07:13.634 09:21:46 version -- app/version.sh@18 -- # get_header_version minor 00:07:13.634 09:21:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # cut -f2 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.634 09:21:46 version -- app/version.sh@18 -- # minor=9 00:07:13.634 09:21:46 version -- app/version.sh@19 -- # get_header_version patch 00:07:13.634 09:21:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # cut -f2 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.634 09:21:46 version -- app/version.sh@19 -- # patch=0 00:07:13.634 09:21:46 version -- app/version.sh@20 -- # get_header_version suffix 00:07:13.634 09:21:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # cut -f2 00:07:13.634 09:21:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.634 09:21:46 version -- app/version.sh@20 -- # suffix=-pre 00:07:13.634 09:21:46 version -- app/version.sh@22 -- # version=24.9 00:07:13.634 09:21:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:13.634 09:21:46 version -- app/version.sh@28 -- # version=24.9rc0 00:07:13.634 09:21:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:13.634 09:21:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:13.634 09:21:46 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:07:13.634 09:21:46 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:13.634 00:07:13.634 real 0m0.112s 00:07:13.634 user 0m0.055s 00:07:13.634 sys 0m0.078s 00:07:13.634 09:21:46 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.634 09:21:46 version -- common/autotest_common.sh@10 -- # set +x 00:07:13.634 ************************************ 00:07:13.634 END TEST version 00:07:13.634 ************************************ 00:07:13.892 09:21:46 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@198 -- # uname -s 00:07:13.892 09:21:46 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:13.892 09:21:46 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:13.892 09:21:46 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:13.892 09:21:46 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:13.892 09:21:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.892 09:21:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.892 09:21:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:13.892 09:21:46 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:13.892 09:21:46 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:13.893 09:21:46 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.893 09:21:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.893 09:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.893 09:21:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.893 ************************************ 00:07:13.893 START TEST nvmf_tcp 00:07:13.893 ************************************ 00:07:13.893 09:21:46 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.893 * Looking for test storage... 00:07:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:13.893 09:21:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:13.893 09:21:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:13.893 09:21:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:13.893 09:21:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.893 09:21:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.893 09:21:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.893 ************************************ 00:07:13.893 START TEST nvmf_target_core 00:07:13.893 ************************************ 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:13.893 * Looking for test storage... 
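The version test that finishes here cross-checks the C header against the installed Python package; a minimal recap of the commands visible in the trace (repository paths shortened):

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 24
  # likewise for MINOR (9), PATCH (0) and SUFFIX (-pre); the "-pre" suffix is expected to surface as "rc0"
  python3 -c 'import spdk; print(spdk.__version__)'   # must print 24.9rc0, matching the header-derived string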
00:07:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.893 ************************************ 00:07:13.893 START TEST nvmf_abort 00:07:13.893 ************************************ 00:07:13.893 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:13.893 * Looking for test storage... 00:07:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.151 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.152 09:21:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.152 09:21:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.053 09:21:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.053 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:16.054 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:16.054 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.054 09:21:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:16.054 Found net devices under 0000:82:00.0: cvl_0_0 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:16.054 Found net devices under 0000:82:00.1: cvl_0_1 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.054 09:21:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.054 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:07:16.313 00:07:16.313 --- 10.0.0.2 ping statistics --- 00:07:16.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.313 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:07:16.313 00:07:16.313 --- 10.0.0.1 ping statistics --- 00:07:16.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.313 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=414628 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 414628 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 414628 ']' 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.313 09:21:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.313 [2024-07-25 09:21:48.923456] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
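The TCP test bed that nvmf_tcp_init builds just above reduces to the following sequence (namespace, interface names and addresses exactly as they appear in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side E810 port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # nvmf_tgt is then launched via "ip netns exec cvl_0_0_ns_spdk ..." and will listen on 10.0.0.2:4420,
  # which the two pings above and below verify in both directions before the target starts.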
00:07:16.313 [2024-07-25 09:21:48.923536] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.313 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.313 [2024-07-25 09:21:48.992002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.571 [2024-07-25 09:21:49.113656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.571 [2024-07-25 09:21:49.113720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.571 [2024-07-25 09:21:49.113737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.571 [2024-07-25 09:21:49.113752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.571 [2024-07-25 09:21:49.113764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.571 [2024-07-25 09:21:49.113855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.571 [2024-07-25 09:21:49.113906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.571 [2024-07-25 09:21:49.113910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.137 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.137 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:17.137 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.137 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:17.137 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 [2024-07-25 09:21:49.888746] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 Malloc0 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.395 Delay0 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 [2024-07-25 09:21:49.963916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.395 09:21:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:17.395 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.395 [2024-07-25 09:21:50.070855] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:19.922 Initializing NVMe Controllers 00:07:19.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:19.922 controller IO queue size 128 less than required 00:07:19.922 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:19.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:19.922 Initialization complete. Launching workers. 
00:07:19.922 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36297 00:07:19.922 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36358, failed to submit 62 00:07:19.922 success 36301, unsuccess 57, failed 0 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.922 rmmod nvme_tcp 00:07:19.922 rmmod nvme_fabrics 00:07:19.922 rmmod nvme_keyring 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 414628 ']' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 414628 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 414628 ']' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 414628 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 414628 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 414628' 00:07:19.922 killing process with pid 414628 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 414628 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 414628 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.922 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.466 00:07:22.466 real 0m8.014s 00:07:22.466 user 0m12.879s 00:07:22.466 sys 0m2.359s 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.466 ************************************ 00:07:22.466 END TEST nvmf_abort 00:07:22.466 ************************************ 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.466 ************************************ 00:07:22.466 START TEST nvmf_ns_hotplug_stress 00:07:22.466 ************************************ 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.466 * Looking for test storage... 
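For reference, the abort test that just completed configured the target entirely through rpc.py before launching the initiator-side example; condensed from the rpc_cmd calls in the trace (paths shortened), with the delay bdev presumably there to keep I/O outstanding long enough to be aborted:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # the initiator then runs build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -q 128,
  # whose completion/abort counters are reported in the output above.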
00:07:22.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.466 09:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:24.367 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.367 09:21:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:24.367 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:24.367 Found net devices under 0000:82:00.0: cvl_0_0 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.367 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:24.368 Found net devices under 0000:82:00.1: cvl_0_1 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:24.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:24.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:07:24.368 00:07:24.368 --- 10.0.0.2 ping statistics --- 00:07:24.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.368 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:07:24.368 00:07:24.368 --- 10.0.0.1 ping statistics --- 00:07:24.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.368 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=416987 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 416987 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 416987 ']' 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
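Condensed, the nvmf_tcp_init plumbing traced just above comes down to the following shell sketch; the cvl_0_0/cvl_0_1 interface names, the namespace name and the 10.0.0.0/24 addresses are the values used on this host, and the sketch is reconstructed from the trace rather than copied from nvmf/common.sh.

  # target interface moves into its own network namespace, the initiator side stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # NVMF_INITIATOR_IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # reachability checks, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application (nvmfpid=416987 above) is then started inside that namespace, which is why the NVMe/TCP listener created later at 10.0.0.2:4420 is only reachable from the initiator side through cvl_0_1.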
00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.368 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.368 [2024-07-25 09:21:56.920600] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:24.368 [2024-07-25 09:21:56.920688] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.368 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.368 [2024-07-25 09:21:56.990871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.626 [2024-07-25 09:21:57.112102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.626 [2024-07-25 09:21:57.112162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.626 [2024-07-25 09:21:57.112178] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.626 [2024-07-25 09:21:57.112192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.626 [2024-07-25 09:21:57.112204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.626 [2024-07-25 09:21:57.112290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.626 [2024-07-25 09:21:57.112351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.626 [2024-07-25 09:21:57.112364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:25.191 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.449 [2024-07-25 09:21:58.112911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.449 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:25.706 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.964 
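The ns_hotplug_stress.sh setup traced around this point (script lines @27 through @42) amounts to the sequence below, shown as a sketch using the values from this run, with $rpc_py standing for scripts/rpc.py against the target just started.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0                    # 32 MiB backing bdev, 512-byte blocks
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # adds artificial latency on top of Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0       # namespace 1
  $rpc_py bdev_null_create NULL1 1000 512                               # 1000 MiB null bdev, resized during the test
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1        # namespace 2
  # 30 s of 512-byte random reads at queue depth 128 keeps I/O in flight for the whole hotplug phase
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!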
[2024-07-25 09:21:58.623047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.964 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.221 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:26.478 Malloc0 00:07:26.478 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:26.736 Delay0 00:07:26.736 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.993 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:27.250 NULL1 00:07:27.250 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:27.508 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=417414 00:07:27.508 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:27.508 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.508 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:27.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.880 Read completed with error (sct=0, sc=11) 00:07:28.880 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.138 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:29.138 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:29.395 true 00:07:29.395 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:29.395 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.961 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.219 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:30.219 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:30.783 true 00:07:30.783 09:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:30.783 09:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.041 09:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.298 09:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:31.298 09:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:31.298 true 00:07:31.557 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:31.557 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.814 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.072 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:32.072 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:32.330 true 00:07:32.330 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:32.330 09:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.263 09:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.263 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.521 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:33.521 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:33.779 true 00:07:33.779 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:33.779 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.037 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.294 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:34.294 09:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:34.552 true 00:07:34.552 09:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:34.552 09:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.485 09:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.743 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:35.743 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:35.743 true 00:07:36.000 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:36.000 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.000 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.258 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:36.258 09:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:36.515 true 00:07:36.515 
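Each repetition of the @44-@50 traces above and below is one pass of the hotplug loop; as a sketch, with null_size starting at 1000:

  null_size=1000
  while kill -0 "$PERF_PID"; do                                        # keep cycling while the 30 s perf run is alive
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 (Delay0)
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it again
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 $null_size                        # grow NULL1; namespace 2 follows the resize
  done

The recurring 'Read completed with error (sct=0, sc=11)' lines are the expected fallout: reads issued by spdk_nvme_perf that land in the window where namespace 1 is detached complete with an invalid-namespace status, and the output is rate-limited, hence the 'Message suppressed 999 times' prefix on most of them.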
09:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:36.515 09:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.902 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.902 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:37.902 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:38.160 true 00:07:38.160 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:38.160 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.417 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.675 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:38.675 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:38.933 true 00:07:38.933 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:38.933 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.190 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.449 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:39.449 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:39.449 true 00:07:39.707 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:39.707 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.639 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.640 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.896 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:40.896 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:41.153 true 00:07:41.153 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:41.153 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.086 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.344 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:42.344 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:42.601 true 00:07:42.601 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:42.602 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.859 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.116 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:43.116 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:43.374 true 00:07:43.374 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:43.374 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.307 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.307 09:22:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:44.307 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:44.564 true 00:07:44.564 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:44.564 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.821 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.079 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:45.079 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:45.336 true 00:07:45.336 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:45.336 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.267 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.524 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:46.524 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:46.781 true 00:07:46.781 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:46.781 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.038 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.296 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:47.296 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:47.554 true 00:07:47.554 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:47.554 09:22:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.486 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.743 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:48.743 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:49.000 true 00:07:49.000 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:49.000 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.258 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.516 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:49.516 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:49.774 true 00:07:49.774 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:49.774 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.708 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.965 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:50.965 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:50.965 true 00:07:51.222 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:51.222 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.480 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.480 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:51.480 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:51.737 true 00:07:51.737 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:51.737 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.996 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.255 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:52.255 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:52.513 true 00:07:52.513 09:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:52.513 09:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.981 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:53.981 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:54.259 true 00:07:54.259 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:54.259 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.192 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.449 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:55.449 09:22:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:55.707 true 00:07:55.707 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:55.707 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.965 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.222 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:56.222 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:56.479 true 00:07:56.479 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:56.479 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.411 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.670 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:57.670 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:57.927 Initializing NVMe Controllers 00:07:57.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.927 Controller IO queue size 128, less than required. 00:07:57.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.927 Controller IO queue size 128, less than required. 00:07:57.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:57.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:57.927 Initialization complete. Launching workers. 
00:07:57.927 ======================================================== 00:07:57.927 Latency(us) 00:07:57.927 Device Information : IOPS MiB/s Average min max 00:07:57.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1401.12 0.68 49490.13 2382.26 1048284.37 00:07:57.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11733.28 5.73 10909.94 2973.26 452962.38 00:07:57.927 ======================================================== 00:07:57.927 Total : 13134.40 6.41 15025.51 2382.26 1048284.37 00:07:57.927 00:07:57.927 true 00:07:57.927 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 417414 00:07:57.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (417414) - No such process 00:07:57.927 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 417414 00:07:57.927 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.184 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.442 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:58.442 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:58.442 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:58.442 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.442 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:58.699 null0 00:07:58.699 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.699 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.699 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:58.956 null1 00:07:58.956 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.956 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.956 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:59.214 null2 00:07:59.214 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.214 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.214 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 
00:07:59.471 null3 00:07:59.471 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.471 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.471 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:59.730 null4 00:07:59.730 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.730 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.730 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:59.987 null5 00:07:59.987 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.987 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.987 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:59.987 null6 00:07:59.987 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.243 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.243 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:00.243 null7 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.499 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
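At ns_hotplug_stress.sh@62-@64 the script is fanning out one background add_remove worker per null bdev and recording each worker's PID; the wait at @66, visible a little further down with the eight PIDs, blocks until all workers finish. A hedged reconstruction of that launcher loop, based only on the line references and commands in the trace rather than the script source:

  # Reconstruction of the launcher (ns_hotplug_stress.sh@62-@66), not a verbatim copy:
  # spawn one add_remove worker per null bdev, collect PIDs, then wait for all of them.
  pids=()
  for ((i = 0; i < nthreads; i++)); do    # nthreads is 8 in this run
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"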
00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 421468 421469 421471 421473 421475 421477 421479 421481 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.500 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.756 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.012 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.013 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.270 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.528 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.043 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.301 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
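Each round of nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns calls above comes from the add_remove helper traced at ns_hotplug_stress.sh@14-@18: every worker hot-adds its namespace to cnode1 and immediately hot-removes it, ten times in a row. A hedged reconstruction from the trace, assuming rpc.py is on PATH (the real script invokes it by its full workspace path):

  # Reconstruction of add_remove (ns_hotplug_stress.sh@14-@18) from the trace.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          # attach the bdev as namespace $nsid of cnode1, then detach it again
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }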
00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.559 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.816 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
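The interleaving of add and remove calls across the eight workers is what exercises the namespace hotplug path. If a run like this needs debugging, the namespace state between rounds can be inspected with the nvmf_get_subsystems RPC; a hedged example of such a check, not part of the traced script and assuming jq is installed on the node:

  # List the namespaces currently attached to cnode1.
  rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'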
00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.074 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.332 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.590 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.848 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.106 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.364 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.622 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.879 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.879 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.879 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.879 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.880 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.880 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.880 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.880 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.138 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.395 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.395 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.395 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.395 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.395 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.653 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.653 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.653 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.653 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.653 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.653 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.910 rmmod nvme_tcp 00:08:05.910 rmmod nvme_fabrics 00:08:05.910 rmmod nvme_keyring 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.910 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 416987 ']' 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 416987 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 416987 ']' 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 416987 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 416987 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 416987' 00:08:05.911 killing process with pid 416987 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 416987 00:08:05.911 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 416987 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.178 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.710 00:08:08.710 real 0m46.213s 00:08:08.710 user 3m30.720s 00:08:08.710 sys 0m17.089s 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:08.710 ************************************ 00:08:08.710 END TEST nvmf_ns_hotplug_stress 00:08:08.710 ************************************ 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.710 ************************************ 00:08:08.710 START TEST nvmf_delete_subsystem 00:08:08.710 ************************************ 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:08.710 * Looking for test storage... 
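For orientation, the ns_hotplug_stress phase that just finished above reduces to a small add/remove loop over namespaces. The sketch below is reconstructed only from the rpc.py calls visible in the trace, it is not the upstream script: namespace IDs 1 through 8 map onto bdevs null0 through null7 under nqn.2016-06.io.spdk:cnode1, and the backgrounded "&"/wait concurrency is an assumption based on the out-of-order namespace IDs in the log.

# Minimal sketch of the hotplug loop, assuming bdevs null0..null7 already exist.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 8); do
    # namespace ID i is backed by bdev null(i-1), e.g. "-n 3 ... null2" as seen in the trace
    "$rpc" nvmf_subsystem_add_ns -n "$i" "$nqn" "null$((i - 1))" &
done
wait

for i in $(seq 1 8); do
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$i" &
done
wait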
00:08:08.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.710 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.711 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:10.085 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:10.085 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.085 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:10.344 Found net devices under 0000:82:00.0: cvl_0_0 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:10.344 Found net devices under 0000:82:00.1: cvl_0_1 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.344 09:22:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:08:10.344 00:08:10.344 --- 10.0.0.2 ping statistics --- 00:08:10.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.344 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:10.344 00:08:10.344 --- 10.0.0.1 ping statistics --- 00:08:10.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.344 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.344 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=424228 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 424228 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 424228 ']' 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.345 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.345 [2024-07-25 09:22:43.032978] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
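Before the target application starts, the harness has already carved the two e810 ports into a back-to-back NVMe/TCP bench: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). A condensed replay of the commands in the trace above, with the interface and namespace names exactly as this host reports them:

# Condensed from the log above; not a general-purpose setup script.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1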
00:08:10.345 [2024-07-25 09:22:43.033068] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.345 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.603 [2024-07-25 09:22:43.098679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.603 [2024-07-25 09:22:43.208285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.603 [2024-07-25 09:22:43.208336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.603 [2024-07-25 09:22:43.208369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.603 [2024-07-25 09:22:43.208382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.603 [2024-07-25 09:22:43.208391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.603 [2024-07-25 09:22:43.208482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.603 [2024-07-25 09:22:43.208488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.603 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.603 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:10.603 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.603 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.603 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 [2024-07-25 09:22:43.359659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 [2024-07-25 09:22:43.375902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 NULL1 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 Delay0 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=424251 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:10.862 09:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:10.862 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.862 [2024-07-25 09:22:43.450515] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
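With nvmf_tgt listening on the /var/tmp/spdk.sock control socket inside the namespace, the rest of the setup is plain JSON-RPC. The following replays the rpc_cmd calls visible above using scripts/rpc.py directly; the inline comments are interpretation, not part of the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8 KiB in-capsule data
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10    # allow any host, at most 10 namespaces
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512                                # 1000 MiB null bdev, 512-byte blocks
"$rpc" bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # roughly 1 s of added latency per I/O
"$rpc" nvmf_subsystem_add_ns "$nqn" Delay0                            # NSID auto-assigned (1 in this run)

The delay bdev is what makes the test interesting: spdk_nvme_perf queues I/O that cannot complete quickly, and the subsystem is then deleted while those requests are still in flight.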
00:08:12.759 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.759 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.759 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 [2024-07-25 09:22:45.590115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f287c00d330 is same with the state(5) to be set 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed 
with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with 
error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 Write completed with error (sct=0, sc=8) 00:08:13.017 Read completed with error (sct=0, sc=8) 00:08:13.017 starting I/O failed: -6 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 starting I/O failed: -6 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 starting I/O failed: -6 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 starting I/O failed: -6 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 starting I/O failed: -6 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 starting I/O failed: -6 00:08:13.018 [2024-07-25 09:22:45.590881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753e0 is same with the state(5) to be set 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 
00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Read completed with error (sct=0, sc=8) 00:08:13.018 Write completed with error (sct=0, sc=8) 00:08:13.951 [2024-07-25 09:22:46.547169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1176ac0 is same with the state(5) to be set 00:08:13.951 Read completed with error (sct=0, sc=8) 00:08:13.951 Write completed with error (sct=0, sc=8) 00:08:13.951 Read completed with error (sct=0, sc=8) 00:08:13.951 Write completed with error (sct=0, sc=8) 00:08:13.951 Read completed with error (sct=0, sc=8) 00:08:13.951 Read completed with error (sct=0, sc=8) 00:08:13.951 Read completed with error (sct=0, sc=8) 00:08:13.951 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 [2024-07-25 09:22:46.592471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f287c00d000 is same with the state(5) to be set 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 
Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 [2024-07-25 09:22:46.592649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f287c00d660 is same with the state(5) to be set 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 [2024-07-25 09:22:46.593088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11755c0 is same with the state(5) to be set 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Write completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 Read completed with error (sct=0, sc=8) 00:08:13.952 [2024-07-25 09:22:46.593284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1175c20 is same with the state(5) to be set 00:08:13.952 Initializing NVMe Controllers 00:08:13.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.952 Controller IO queue size 128, less than required. 00:08:13.952 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:13.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:13.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:13.952 Initialization complete. Launching workers. 00:08:13.952 ======================================================== 00:08:13.952 Latency(us) 00:08:13.952 Device Information : IOPS MiB/s Average min max 00:08:13.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.23 0.08 897239.36 417.50 1013197.30 00:08:13.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.79 0.08 916308.32 595.96 1012688.50 00:08:13.952 ======================================================== 00:08:13.952 Total : 330.02 0.16 906530.10 417.50 1013197.30 00:08:13.952 00:08:13.952 [2024-07-25 09:22:46.594238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1176ac0 (9): Bad file descriptor 00:08:13.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:13.952 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.952 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:13.952 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 424251 00:08:13.952 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 424251 00:08:14.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (424251) - No such process 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 424251 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 424251 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 424251 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.518 [2024-07-25 09:22:47.119039] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=424662 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.518 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:14.518 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.518 [2024-07-25 09:22:47.183778] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:15.084 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.084 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:15.084 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.649 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.649 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:15.649 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.907 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.907 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:15.907 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.472 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.472 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:16.472 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.037 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.037 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:17.037 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.602 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.602 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:17.602 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.860 Initializing NVMe Controllers 00:08:17.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:17.860 Controller IO queue size 128, less than required. 00:08:17.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:17.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:17.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:17.860 Initialization complete. Launching workers. 
00:08:17.860 ======================================================== 00:08:17.860 Latency(us) 00:08:17.860 Device Information : IOPS MiB/s Average min max 00:08:17.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004704.01 1000176.81 1043268.20 00:08:17.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004034.83 1000158.19 1041812.08 00:08:17.860 ======================================================== 00:08:17.860 Total : 256.00 0.12 1004369.42 1000158.19 1043268.20 00:08:17.860 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 424662 00:08:18.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (424662) - No such process 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 424662 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.119 rmmod nvme_tcp 00:08:18.119 rmmod nvme_fabrics 00:08:18.119 rmmod nvme_keyring 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 424228 ']' 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 424228 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 424228 ']' 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 424228 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 424228 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 424228' 00:08:18.119 killing process with pid 424228 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 424228 00:08:18.119 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 424228 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.378 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.911 00:08:20.911 real 0m12.146s 00:08:20.911 user 0m27.625s 00:08:20.911 sys 0m2.950s 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.911 ************************************ 00:08:20.911 END TEST nvmf_delete_subsystem 00:08:20.911 ************************************ 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.911 ************************************ 00:08:20.911 START TEST nvmf_host_management 00:08:20.911 ************************************ 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.911 * Looking for test storage... 
00:08:20.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.911 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.912 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.811 
09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.811 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:22.812 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:22.812 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:22.812 Found net devices under 0000:82:00.0: cvl_0_0 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:22.812 Found net devices under 0000:82:00.1: cvl_0_1 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:22.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:22.812 00:08:22.812 --- 10.0.0.2 ping statistics --- 00:08:22.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.812 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:22.812 00:08:22.812 --- 10.0.0.1 ping statistics --- 00:08:22.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.812 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=427004 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 427004 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 427004 ']' 00:08:22.812 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.813 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.813 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.813 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.813 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.813 [2024-07-25 09:22:55.330937] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:08:22.813 [2024-07-25 09:22:55.331022] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.813 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.813 [2024-07-25 09:22:55.397158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.813 [2024-07-25 09:22:55.507382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.813 [2024-07-25 09:22:55.507435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.813 [2024-07-25 09:22:55.507465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.813 [2024-07-25 09:22:55.507478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.813 [2024-07-25 09:22:55.507488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.813 [2024-07-25 09:22:55.507582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.813 [2024-07-25 09:22:55.507635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.813 [2024-07-25 09:22:55.507659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.813 [2024-07-25 09:22:55.507662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.071 [2024-07-25 09:22:55.658675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.071 Malloc0 00:08:23.071 [2024-07-25 09:22:55.717668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=427164 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 427164 /var/tmp/bdevperf.sock 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 427164 ']' 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.071 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:23.071 { 00:08:23.071 "params": { 00:08:23.071 "name": "Nvme$subsystem", 00:08:23.071 "trtype": "$TEST_TRANSPORT", 00:08:23.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.071 "adrfam": "ipv4", 00:08:23.071 "trsvcid": "$NVMF_PORT", 00:08:23.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.072 "hdgst": ${hdgst:-false}, 00:08:23.072 "ddgst": ${ddgst:-false} 00:08:23.072 }, 00:08:23.072 "method": "bdev_nvme_attach_controller" 00:08:23.072 } 00:08:23.072 EOF 00:08:23.072 )") 00:08:23.072 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:23.072 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:23.072 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:23.072 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:23.072 "params": { 00:08:23.072 "name": "Nvme0", 00:08:23.072 "trtype": "tcp", 00:08:23.072 "traddr": "10.0.0.2", 00:08:23.072 "adrfam": "ipv4", 00:08:23.072 "trsvcid": "4420", 00:08:23.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:23.072 "hdgst": false, 00:08:23.072 "ddgst": false 00:08:23.072 }, 00:08:23.072 "method": "bdev_nvme_attach_controller" 00:08:23.072 }' 00:08:23.072 [2024-07-25 09:22:55.788417] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:08:23.072 [2024-07-25 09:22:55.788496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427164 ] 00:08:23.330 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.330 [2024-07-25 09:22:55.849835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.330 [2024-07-25 09:22:55.959094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.588 Running I/O for 10 seconds... 
00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:23.588 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.846 09:22:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=546 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 546 -ge 100 ']' 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.846 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.106 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.106 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.106 [2024-07-25 09:22:56.585319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.106 [2024-07-25 09:22:56.585378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.106 [2024-07-25 09:22:56.585399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.106 [2024-07-25 09:22:56.585414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.106 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 [2024-07-25 09:22:56.585428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.106 [2024-07-25 09:22:56.585442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.106 [2024-07-25 09:22:56.585456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.106 [2024-07-25 09:22:56.585470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.106 [2024-07-25 09:22:56.585483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x592790 is same with the state(5) to be set 00:08:24.106 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.106 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- 
# sleep 1 00:08:24.106 [2024-07-25 09:22:56.594303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:24.106 [2024-07-25 09:22:56.594353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) notice pair repeats for each remaining queued WRITE, cid:1 through cid:63 (lba 82048 through 89984, len:128), timestamps 09:22:56.594390 through 09:22:56.596256 ...]
00:08:24.107 [2024-07-25 09:22:56.596346] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9a35a0 was disconnected and freed. reset controller.
00:08:24.107 [2024-07-25 09:22:56.596420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x592790 (9): Bad file descriptor
00:08:24.107 [2024-07-25 09:22:56.597509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:24.107 task offset: 81920 on job bdev=Nvme0n1 fails
00:08:24.107
00:08:24.107 Latency(us)
00:08:24.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:24.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:24.108 Job: Nvme0n1 ended in about 0.41 seconds with error
00:08:24.108 Verification LBA range: start 0x0 length 0x400
00:08:24.108 Nvme0n1 : 0.41 1549.01 96.81 154.90 0.00 36504.83 2463.67 34952.53
00:08:24.108 ===================================================================================================================
00:08:24.108 Total : 1549.01 96.81 154.90 0.00 36504.83 2463.67 34952.53
00:08:24.108 [2024-07-25 09:22:56.599402] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:24.108 [2024-07-25 09:22:56.643691] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 427164 00:08:25.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (427164) - No such process 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.041 { 00:08:25.041 "params": { 00:08:25.041 "name": "Nvme$subsystem", 00:08:25.041 "trtype": "$TEST_TRANSPORT", 00:08:25.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.041 "adrfam": "ipv4", 00:08:25.041 "trsvcid": "$NVMF_PORT", 00:08:25.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.041 "hdgst": ${hdgst:-false}, 00:08:25.041 "ddgst": ${ddgst:-false} 00:08:25.041 }, 00:08:25.041 "method": "bdev_nvme_attach_controller" 00:08:25.041 } 00:08:25.041 EOF 00:08:25.041 )") 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:25.041 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.041 "params": { 00:08:25.041 "name": "Nvme0", 00:08:25.041 "trtype": "tcp", 00:08:25.041 "traddr": "10.0.0.2", 00:08:25.041 "adrfam": "ipv4", 00:08:25.041 "trsvcid": "4420", 00:08:25.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:25.041 "hdgst": false, 00:08:25.041 "ddgst": false 00:08:25.041 }, 00:08:25.041 "method": "bdev_nvme_attach_controller" 00:08:25.041 }' 00:08:25.042 [2024-07-25 09:22:57.642027] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
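For reference, the JSON fragment printed by gen_nvmf_target_json above is what this second bdevperf run consumes via /dev/fd/62. A minimal standalone config with the same effect would look roughly like the sketch below; the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual JSON config layout rather than shown verbatim in this log, while the parameter values are exactly the ones printed above:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

Saved to a file (bdevperf.json is a hypothetical name), the same workload could be replayed with: build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 1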
00:08:25.042 [2024-07-25 09:22:57.642123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427334 ] 00:08:25.042 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.042 [2024-07-25 09:22:57.705758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.299 [2024-07-25 09:22:57.818488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.557 Running I/O for 1 seconds... 00:08:26.490 00:08:26.490 Latency(us) 00:08:26.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.490 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:26.490 Verification LBA range: start 0x0 length 0x400 00:08:26.490 Nvme0n1 : 1.01 1649.81 103.11 0.00 0.00 38162.88 5315.70 33593.27 00:08:26.490 =================================================================================================================== 00:08:26.490 Total : 1649.81 103.11 0.00 0.00 38162.88 5315.70 33593.27 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.748 rmmod nvme_tcp 00:08:26.748 rmmod nvme_fabrics 00:08:26.748 rmmod nvme_keyring 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 427004 ']' 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 427004 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 427004 ']' 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 427004 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@953 -- # uname 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 427004 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:26.748 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:26.749 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 427004' 00:08:26.749 killing process with pid 427004 00:08:26.749 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 427004 00:08:26.749 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 427004 00:08:27.007 [2024-07-25 09:22:59.679896] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.007 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:29.541 00:08:29.541 real 0m8.663s 00:08:29.541 user 0m19.649s 00:08:29.541 sys 0m2.658s 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.541 ************************************ 00:08:29.541 END TEST nvmf_host_management 00:08:29.541 ************************************ 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.541 ************************************ 00:08:29.541 START TEST nvmf_lvol 00:08:29.541 ************************************ 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.541 * Looking for test storage... 00:08:29.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.541 09:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.443 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:31.444 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:31.444 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:31.444 Found net devices under 0000:82:00.0: cvl_0_0 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:31.444 Found net devices under 0000:82:00.1: cvl_0_1 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.444 09:23:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.444 09:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:08:31.444 00:08:31.444 --- 10.0.0.2 ping statistics --- 00:08:31.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.444 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:31.444 00:08:31.444 --- 10.0.0.1 ping statistics --- 00:08:31.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.444 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=429526 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 429526 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 429526 ']' 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.444 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.444 [2024-07-25 09:23:04.138680] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:31.444 [2024-07-25 09:23:04.138757] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.444 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.702 [2024-07-25 09:23:04.203191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.702 [2024-07-25 09:23:04.313072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.702 [2024-07-25 09:23:04.313122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.702 [2024-07-25 09:23:04.313151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.702 [2024-07-25 09:23:04.313163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.702 [2024-07-25 09:23:04.313173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.702 [2024-07-25 09:23:04.313257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.702 [2024-07-25 09:23:04.313322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.702 [2024-07-25 09:23:04.313325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.702 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.702 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:31.702 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.702 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.702 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.960 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.217 [2024-07-25 09:23:04.729764] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.217 09:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.474 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:32.474 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.732 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:32.732 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:32.989 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:33.248 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cfa55031-1172-4e98-b308-f3084e5c37ae 
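Condensed, the backing store that nvmf_lvol.sh has built at this point is an lvolstore named lvs on a RAID0 of two malloc bdevs. Stripped of the test-harness wrappers, the equivalent rpc.py sequence is roughly the following sketch (paths relative to the spdk checkout; the lvstore UUID printed here will differ from run to run):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512            # 64 MB, 512-byte blocks -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512            # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs   # prints the lvstore UUID used by bdev_lvol_create below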
00:08:33.248 09:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cfa55031-1172-4e98-b308-f3084e5c37ae lvol 20 00:08:33.505 09:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fdc0a00f-9893-42c1-9b9e-296faaddcbee 00:08:33.505 09:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.762 09:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdc0a00f-9893-42c1-9b9e-296faaddcbee 00:08:34.019 09:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.276 [2024-07-25 09:23:06.835874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.276 09:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.533 09:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=429951 00:08:34.533 09:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:34.533 09:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:34.533 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.469 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fdc0a00f-9893-42c1-9b9e-296faaddcbee MY_SNAPSHOT 00:08:35.728 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ddbf8f60-ba0a-469d-99d1-7b5ddd2bc249 00:08:35.728 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fdc0a00f-9893-42c1-9b9e-296faaddcbee 30 00:08:36.293 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ddbf8f60-ba0a-469d-99d1-7b5ddd2bc249 MY_CLONE 00:08:36.551 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=40cff2ff-19f3-49bc-8675-229a32751f25 00:08:36.551 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 40cff2ff-19f3-49bc-8675-229a32751f25 00:08:37.115 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 429951 00:08:45.218 Initializing NVMe Controllers 00:08:45.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.218 Controller IO queue size 128, less than required. 00:08:45.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:45.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:45.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:45.218 Initialization complete. Launching workers. 00:08:45.218 ======================================================== 00:08:45.218 Latency(us) 00:08:45.218 Device Information : IOPS MiB/s Average min max 00:08:45.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10706.20 41.82 11958.49 536.32 60431.72 00:08:45.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10628.80 41.52 12046.26 2272.26 78611.56 00:08:45.218 ======================================================== 00:08:45.218 Total : 21335.00 83.34 12002.21 536.32 78611.56 00:08:45.218 00:08:45.218 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.218 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fdc0a00f-9893-42c1-9b9e-296faaddcbee 00:08:45.475 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cfa55031-1172-4e98-b308-f3084e5c37ae 00:08:45.732 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:45.732 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.733 rmmod nvme_tcp 00:08:45.733 rmmod nvme_fabrics 00:08:45.733 rmmod nvme_keyring 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 429526 ']' 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 429526 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 429526 ']' 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 429526 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 429526 00:08:45.733 09:23:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 429526' 00:08:45.733 killing process with pid 429526 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 429526 00:08:45.733 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 429526 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.990 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.525 00:08:48.525 real 0m18.915s 00:08:48.525 user 1m4.549s 00:08:48.525 sys 0m5.708s 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.525 ************************************ 00:08:48.525 END TEST nvmf_lvol 00:08:48.525 ************************************ 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.525 ************************************ 00:08:48.525 START TEST nvmf_lvs_grow 00:08:48.525 ************************************ 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.525 * Looking for test storage... 
00:08:48.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.525 09:23:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:48.525 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.526 09:23:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.526 09:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.427 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:50.428 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:50.428 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.428 
09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:50.428 Found net devices under 0000:82:00.0: cvl_0_0 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:50.428 Found net devices under 0000:82:00.1: cvl_0_1 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.428 09:23:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:08:50.428 00:08:50.428 --- 10.0.0.2 ping statistics --- 00:08:50.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.428 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:50.428 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:08:50.428 00:08:50.428 --- 10.0.0.1 ping statistics --- 00:08:50.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.428 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=433220 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 433220 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 433220 ']' 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.428 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.428 [2024-07-25 09:23:23.080635] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:50.428 [2024-07-25 09:23:23.080722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.428 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.428 [2024-07-25 09:23:23.149895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.686 [2024-07-25 09:23:23.264805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.686 [2024-07-25 09:23:23.264874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.686 [2024-07-25 09:23:23.264888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.686 [2024-07-25 09:23:23.264906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.686 [2024-07-25 09:23:23.264917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.686 [2024-07-25 09:23:23.264961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.618 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.618 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:51.619 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.619 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.619 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.619 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.619 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.876 [2024-07-25 09:23:24.365444] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.876 ************************************ 00:08:51.876 START TEST lvs_grow_clean 00:08:51.876 ************************************ 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.876 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.134 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:52.134 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:52.392 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:08:52.392 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:08:52.392 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:52.649 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:52.649 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:52.649 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 lvol 150 00:08:52.907 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7c11ea2-8106-4ee4-851a-a88ce0bc051f 00:08:52.907 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.907 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:53.165 [2024-07-25 09:23:25.747655] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:53.165 [2024-07-25 09:23:25.747779] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:53.165 true 00:08:53.165 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:08:53.165 09:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:53.423 09:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:53.423 09:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.682 09:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7c11ea2-8106-4ee4-851a-a88ce0bc051f 00:08:53.940 09:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.197 [2024-07-25 09:23:26.838964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.198 09:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=433782 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 433782 /var/tmp/bdevperf.sock 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 433782 ']' 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.456 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:54.456 [2024-07-25 09:23:27.139893] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:54.456 [2024-07-25 09:23:27.139974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433782 ] 00:08:54.456 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.714 [2024-07-25 09:23:27.201938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.714 [2024-07-25 09:23:27.317888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.714 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.714 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:54.714 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:55.280 Nvme0n1 00:08:55.280 09:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:55.538 [ 00:08:55.538 { 00:08:55.538 "name": "Nvme0n1", 00:08:55.538 "aliases": [ 00:08:55.538 "e7c11ea2-8106-4ee4-851a-a88ce0bc051f" 00:08:55.538 ], 00:08:55.538 "product_name": "NVMe disk", 00:08:55.538 "block_size": 4096, 00:08:55.538 "num_blocks": 38912, 00:08:55.538 "uuid": "e7c11ea2-8106-4ee4-851a-a88ce0bc051f", 00:08:55.538 "assigned_rate_limits": { 00:08:55.538 "rw_ios_per_sec": 0, 00:08:55.538 "rw_mbytes_per_sec": 0, 00:08:55.538 "r_mbytes_per_sec": 0, 00:08:55.538 "w_mbytes_per_sec": 0 00:08:55.538 }, 00:08:55.538 "claimed": false, 00:08:55.538 "zoned": false, 00:08:55.538 "supported_io_types": { 00:08:55.538 "read": true, 00:08:55.538 "write": true, 00:08:55.538 "unmap": true, 00:08:55.538 "flush": true, 00:08:55.538 "reset": true, 00:08:55.538 "nvme_admin": true, 00:08:55.538 "nvme_io": true, 00:08:55.538 "nvme_io_md": false, 00:08:55.538 "write_zeroes": true, 00:08:55.538 "zcopy": false, 00:08:55.538 "get_zone_info": false, 00:08:55.538 "zone_management": false, 00:08:55.538 "zone_append": false, 00:08:55.538 "compare": true, 00:08:55.538 "compare_and_write": true, 00:08:55.538 "abort": true, 00:08:55.538 "seek_hole": false, 00:08:55.538 "seek_data": false, 00:08:55.538 "copy": true, 00:08:55.538 "nvme_iov_md": false 00:08:55.538 }, 00:08:55.538 "memory_domains": [ 00:08:55.538 { 00:08:55.538 "dma_device_id": "system", 00:08:55.538 "dma_device_type": 1 00:08:55.538 } 00:08:55.538 ], 00:08:55.538 "driver_specific": { 00:08:55.538 "nvme": [ 00:08:55.538 { 00:08:55.538 "trid": { 00:08:55.538 "trtype": "TCP", 00:08:55.538 "adrfam": "IPv4", 00:08:55.538 "traddr": "10.0.0.2", 00:08:55.538 "trsvcid": "4420", 00:08:55.538 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:55.538 }, 00:08:55.538 "ctrlr_data": { 00:08:55.538 "cntlid": 1, 00:08:55.538 "vendor_id": "0x8086", 00:08:55.538 "model_number": "SPDK bdev Controller", 00:08:55.538 "serial_number": "SPDK0", 00:08:55.538 "firmware_revision": "24.09", 00:08:55.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.538 "oacs": { 00:08:55.538 "security": 0, 00:08:55.538 "format": 0, 00:08:55.538 "firmware": 0, 00:08:55.538 "ns_manage": 0 00:08:55.538 }, 00:08:55.538 
"multi_ctrlr": true, 00:08:55.538 "ana_reporting": false 00:08:55.538 }, 00:08:55.538 "vs": { 00:08:55.538 "nvme_version": "1.3" 00:08:55.538 }, 00:08:55.538 "ns_data": { 00:08:55.538 "id": 1, 00:08:55.538 "can_share": true 00:08:55.538 } 00:08:55.538 } 00:08:55.538 ], 00:08:55.538 "mp_policy": "active_passive" 00:08:55.538 } 00:08:55.538 } 00:08:55.538 ] 00:08:55.538 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=433821 00:08:55.538 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:55.538 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.538 Running I/O for 10 seconds... 00:08:56.473 Latency(us) 00:08:56.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.473 Nvme0n1 : 1.00 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:08:56.473 =================================================================================================================== 00:08:56.473 Total : 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:08:56.473 00:08:57.406 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:08:57.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.663 Nvme0n1 : 2.00 15886.50 62.06 0.00 0.00 0.00 0.00 0.00 00:08:57.663 =================================================================================================================== 00:08:57.663 Total : 15886.50 62.06 0.00 0.00 0.00 0.00 0.00 00:08:57.663 00:08:57.663 true 00:08:57.663 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:08:57.663 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:57.921 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.921 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.921 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 433821 00:08:58.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.485 Nvme0n1 : 3.00 16335.00 63.81 0.00 0.00 0.00 0.00 0.00 00:08:58.485 =================================================================================================================== 00:08:58.485 Total : 16335.00 63.81 0.00 0.00 0.00 0.00 0.00 00:08:58.485 00:08:59.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.858 Nvme0n1 : 4.00 16625.25 64.94 0.00 0.00 0.00 0.00 0.00 00:08:59.858 =================================================================================================================== 00:08:59.858 Total : 16625.25 64.94 0.00 0.00 0.00 0.00 0.00 00:08:59.858 00:09:00.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:00.831 Nvme0n1 : 5.00 16806.20 65.65 0.00 0.00 0.00 0.00 0.00 00:09:00.831 =================================================================================================================== 00:09:00.831 Total : 16806.20 65.65 0.00 0.00 0.00 0.00 0.00 00:09:00.831 00:09:01.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.822 Nvme0n1 : 6.00 16534.83 64.59 0.00 0.00 0.00 0.00 0.00 00:09:01.822 =================================================================================================================== 00:09:01.822 Total : 16534.83 64.59 0.00 0.00 0.00 0.00 0.00 00:09:01.822 00:09:02.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.756 Nvme0n1 : 7.00 16341.00 63.83 0.00 0.00 0.00 0.00 0.00 00:09:02.756 =================================================================================================================== 00:09:02.756 Total : 16341.00 63.83 0.00 0.00 0.00 0.00 0.00 00:09:02.756 00:09:03.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.690 Nvme0n1 : 8.00 16379.62 63.98 0.00 0.00 0.00 0.00 0.00 00:09:03.690 =================================================================================================================== 00:09:03.690 Total : 16379.62 63.98 0.00 0.00 0.00 0.00 0.00 00:09:03.690 00:09:04.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.625 Nvme0n1 : 9.00 16411.44 64.11 0.00 0.00 0.00 0.00 0.00 00:09:04.625 =================================================================================================================== 00:09:04.625 Total : 16411.44 64.11 0.00 0.00 0.00 0.00 0.00 00:09:04.625 00:09:05.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.560 Nvme0n1 : 10.00 16422.70 64.15 0.00 0.00 0.00 0.00 0.00 00:09:05.560 =================================================================================================================== 00:09:05.560 Total : 16422.70 64.15 0.00 0.00 0.00 0.00 0.00 00:09:05.560 00:09:05.560 00:09:05.560 Latency(us) 00:09:05.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.560 Nvme0n1 : 10.01 16425.79 64.16 0.00 0.00 7788.08 2997.67 16505.36 00:09:05.560 =================================================================================================================== 00:09:05.560 Total : 16425.79 64.16 0.00 0.00 7788.08 2997.67 16505.36 00:09:05.560 0 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 433782 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 433782 ']' 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 433782 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 433782 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:05.560 09:23:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 433782' 00:09:05.560 killing process with pid 433782 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 433782 00:09:05.560 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.560 00:09:05.560 Latency(us) 00:09:05.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.560 =================================================================================================================== 00:09:05.560 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.560 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 433782 00:09:06.125 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.125 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.383 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:06.383 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.641 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:06.641 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:06.641 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.898 [2024-07-25 09:23:39.603252] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:07.156 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:07.413 request: 00:09:07.413 { 00:09:07.413 "uuid": "107b72e9-197d-48ba-8c58-b8f3f12b7db3", 00:09:07.413 "method": "bdev_lvol_get_lvstores", 00:09:07.413 "req_id": 1 00:09:07.413 } 00:09:07.413 Got JSON-RPC error response 00:09:07.413 response: 00:09:07.413 { 00:09:07.413 "code": -19, 00:09:07.413 "message": "No such device" 00:09:07.414 } 00:09:07.414 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:07.414 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:07.414 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:07.414 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:07.414 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.671 aio_bdev 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7c11ea2-8106-4ee4-851a-a88ce0bc051f 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=e7c11ea2-8106-4ee4-851a-a88ce0bc051f 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:07.671 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.937 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b e7c11ea2-8106-4ee4-851a-a88ce0bc051f -t 2000 00:09:08.195 [ 00:09:08.195 { 00:09:08.195 "name": "e7c11ea2-8106-4ee4-851a-a88ce0bc051f", 00:09:08.195 "aliases": [ 00:09:08.195 "lvs/lvol" 00:09:08.195 ], 00:09:08.195 "product_name": "Logical Volume", 00:09:08.195 "block_size": 4096, 00:09:08.195 "num_blocks": 38912, 00:09:08.195 "uuid": "e7c11ea2-8106-4ee4-851a-a88ce0bc051f", 00:09:08.195 "assigned_rate_limits": { 00:09:08.195 "rw_ios_per_sec": 0, 00:09:08.195 "rw_mbytes_per_sec": 0, 00:09:08.195 "r_mbytes_per_sec": 0, 00:09:08.195 "w_mbytes_per_sec": 0 00:09:08.195 }, 00:09:08.195 "claimed": false, 00:09:08.195 "zoned": false, 00:09:08.195 "supported_io_types": { 00:09:08.195 "read": true, 00:09:08.195 "write": true, 00:09:08.195 "unmap": true, 00:09:08.195 "flush": false, 00:09:08.195 "reset": true, 00:09:08.195 "nvme_admin": false, 00:09:08.195 "nvme_io": false, 00:09:08.195 "nvme_io_md": false, 00:09:08.195 "write_zeroes": true, 00:09:08.195 "zcopy": false, 00:09:08.195 "get_zone_info": false, 00:09:08.195 "zone_management": false, 00:09:08.195 "zone_append": false, 00:09:08.195 "compare": false, 00:09:08.195 "compare_and_write": false, 00:09:08.195 "abort": false, 00:09:08.195 "seek_hole": true, 00:09:08.195 "seek_data": true, 00:09:08.195 "copy": false, 00:09:08.195 "nvme_iov_md": false 00:09:08.195 }, 00:09:08.195 "driver_specific": { 00:09:08.195 "lvol": { 00:09:08.195 "lvol_store_uuid": "107b72e9-197d-48ba-8c58-b8f3f12b7db3", 00:09:08.195 "base_bdev": "aio_bdev", 00:09:08.195 "thin_provision": false, 00:09:08.195 "num_allocated_clusters": 38, 00:09:08.195 "snapshot": false, 00:09:08.195 "clone": false, 00:09:08.195 "esnap_clone": false 00:09:08.195 } 00:09:08.195 } 00:09:08.195 } 00:09:08.195 ] 00:09:08.195 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:08.195 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:08.195 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:08.453 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:08.453 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:08.453 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:08.710 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:08.710 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7c11ea2-8106-4ee4-851a-a88ce0bc051f 00:09:08.968 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 107b72e9-197d-48ba-8c58-b8f3f12b7db3 00:09:09.227 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.227 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.501 00:09:09.501 real 0m17.567s 00:09:09.501 user 0m17.000s 00:09:09.501 sys 0m1.918s 00:09:09.501 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.501 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.501 ************************************ 00:09:09.501 END TEST lvs_grow_clean 00:09:09.501 ************************************ 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.501 ************************************ 00:09:09.501 START TEST lvs_grow_dirty 00:09:09.501 ************************************ 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.501 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.761 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.762 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.019 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:10.019 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:10.019 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.277 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.277 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.277 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a lvol 150 00:09:10.535 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=01620f9d-1957-4805-9050-0795b7c0b188 00:09:10.535 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.535 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.793 [2024-07-25 09:23:43.332807] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.793 [2024-07-25 09:23:43.332900] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.793 true 00:09:10.793 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.793 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:11.049 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:11.049 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.306 09:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01620f9d-1957-4805-9050-0795b7c0b188 00:09:11.564 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.822 [2024-07-25 09:23:44.380045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.822 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
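Everything in the lvs_grow_dirty setup above is driven through rpc.py against the running nvmf_tgt. As a rough sketch (not captured output), the same sequence can be replayed by hand roughly as follows; the 200M/400M file sizes, the 4 MiB cluster size, the 150M lvol size and the NQN/listener address are taken from the trace, while RPC and AIO_FILE are illustrative placeholders for the rpc.py path and backing file the test actually uses:

    RPC=rpc.py                       # placeholder for <spdk>/scripts/rpc.py
    AIO_FILE=/tmp/aio_bdev_file      # placeholder for the test's aio_bdev file

    truncate -s 200M "$AIO_FILE"                                   # initial backing file
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096                 # expose it as an AIO bdev
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)         # returns the lvstore UUID
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB logical volume

    truncate -s 400M "$AIO_FILE"                                   # grow the backing file...
    $RPC bdev_aio_rescan aio_bdev                                  # ...and let the AIO bdev pick it up

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420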
00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=435860 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 435860 /var/tmp/bdevperf.sock 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 435860 ']' 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:12.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.080 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.080 [2024-07-25 09:23:44.687835] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
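The bdevperf run that follows is driven over its own RPC socket: the test launches bdevperf idle with -z, attaches the NVMe-oF/TCP namespace through that socket, and then fires the workload via bdevperf.py. A minimal sketch of that flow using the arguments visible in this trace (SPDK_DIR is a placeholder for the jenkins checkout path shown above):

    SPDK_DIR=/path/to/spdk          # placeholder
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z): core mask 0x2, 4 KiB random writes, queue depth 128, 10 s run.
    "$SPDK_DIR"/build/examples/bdevperf -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the namespace exported earlier; it shows up as bdev "Nvme0n1".
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the configured workload; the per-second results are printed below.
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests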
00:09:12.080 [2024-07-25 09:23:44.687910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435860 ] 00:09:12.081 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.081 [2024-07-25 09:23:44.748463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.338 [2024-07-25 09:23:44.857923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.338 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.338 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:12.339 09:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.904 Nvme0n1 00:09:12.904 09:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:13.162 [ 00:09:13.162 { 00:09:13.162 "name": "Nvme0n1", 00:09:13.162 "aliases": [ 00:09:13.162 "01620f9d-1957-4805-9050-0795b7c0b188" 00:09:13.162 ], 00:09:13.162 "product_name": "NVMe disk", 00:09:13.162 "block_size": 4096, 00:09:13.162 "num_blocks": 38912, 00:09:13.162 "uuid": "01620f9d-1957-4805-9050-0795b7c0b188", 00:09:13.162 "assigned_rate_limits": { 00:09:13.162 "rw_ios_per_sec": 0, 00:09:13.162 "rw_mbytes_per_sec": 0, 00:09:13.162 "r_mbytes_per_sec": 0, 00:09:13.162 "w_mbytes_per_sec": 0 00:09:13.162 }, 00:09:13.162 "claimed": false, 00:09:13.162 "zoned": false, 00:09:13.162 "supported_io_types": { 00:09:13.162 "read": true, 00:09:13.162 "write": true, 00:09:13.162 "unmap": true, 00:09:13.162 "flush": true, 00:09:13.162 "reset": true, 00:09:13.162 "nvme_admin": true, 00:09:13.162 "nvme_io": true, 00:09:13.162 "nvme_io_md": false, 00:09:13.162 "write_zeroes": true, 00:09:13.162 "zcopy": false, 00:09:13.162 "get_zone_info": false, 00:09:13.162 "zone_management": false, 00:09:13.162 "zone_append": false, 00:09:13.162 "compare": true, 00:09:13.162 "compare_and_write": true, 00:09:13.162 "abort": true, 00:09:13.162 "seek_hole": false, 00:09:13.162 "seek_data": false, 00:09:13.162 "copy": true, 00:09:13.162 "nvme_iov_md": false 00:09:13.162 }, 00:09:13.162 "memory_domains": [ 00:09:13.162 { 00:09:13.162 "dma_device_id": "system", 00:09:13.162 "dma_device_type": 1 00:09:13.162 } 00:09:13.162 ], 00:09:13.162 "driver_specific": { 00:09:13.162 "nvme": [ 00:09:13.162 { 00:09:13.162 "trid": { 00:09:13.162 "trtype": "TCP", 00:09:13.162 "adrfam": "IPv4", 00:09:13.162 "traddr": "10.0.0.2", 00:09:13.162 "trsvcid": "4420", 00:09:13.162 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:13.162 }, 00:09:13.162 "ctrlr_data": { 00:09:13.162 "cntlid": 1, 00:09:13.162 "vendor_id": "0x8086", 00:09:13.162 "model_number": "SPDK bdev Controller", 00:09:13.162 "serial_number": "SPDK0", 00:09:13.162 "firmware_revision": "24.09", 00:09:13.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:13.162 "oacs": { 00:09:13.162 "security": 0, 00:09:13.162 "format": 0, 00:09:13.162 "firmware": 0, 00:09:13.162 "ns_manage": 0 00:09:13.162 }, 00:09:13.162 
"multi_ctrlr": true, 00:09:13.162 "ana_reporting": false 00:09:13.162 }, 00:09:13.162 "vs": { 00:09:13.162 "nvme_version": "1.3" 00:09:13.162 }, 00:09:13.162 "ns_data": { 00:09:13.162 "id": 1, 00:09:13.162 "can_share": true 00:09:13.162 } 00:09:13.162 } 00:09:13.162 ], 00:09:13.162 "mp_policy": "active_passive" 00:09:13.162 } 00:09:13.162 } 00:09:13.162 ] 00:09:13.162 09:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=435998 00:09:13.162 09:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:13.162 09:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:13.162 Running I/O for 10 seconds... 00:09:14.537 Latency(us) 00:09:14.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.537 Nvme0n1 : 1.00 14556.00 56.86 0.00 0.00 0.00 0.00 0.00 00:09:14.537 =================================================================================================================== 00:09:14.537 Total : 14556.00 56.86 0.00 0.00 0.00 0.00 0.00 00:09:14.537 00:09:15.103 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:15.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.361 Nvme0n1 : 2.00 15343.00 59.93 0.00 0.00 0.00 0.00 0.00 00:09:15.361 =================================================================================================================== 00:09:15.361 Total : 15343.00 59.93 0.00 0.00 0.00 0.00 0.00 00:09:15.361 00:09:15.361 true 00:09:15.361 09:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:15.361 09:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:15.620 09:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.620 09:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.620 09:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 435998 00:09:16.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.185 Nvme0n1 : 3.00 15583.00 60.87 0.00 0.00 0.00 0.00 0.00 00:09:16.185 =================================================================================================================== 00:09:16.185 Total : 15583.00 60.87 0.00 0.00 0.00 0.00 0.00 00:09:16.185 00:09:17.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.558 Nvme0n1 : 4.00 15958.50 62.34 0.00 0.00 0.00 0.00 0.00 00:09:17.558 =================================================================================================================== 00:09:17.558 Total : 15958.50 62.34 0.00 0.00 0.00 0.00 0.00 00:09:17.558 00:09:18.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:18.492 Nvme0n1 : 5.00 15980.60 62.42 0.00 0.00 0.00 0.00 0.00 00:09:18.492 =================================================================================================================== 00:09:18.492 Total : 15980.60 62.42 0.00 0.00 0.00 0.00 0.00 00:09:18.492 00:09:19.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.425 Nvme0n1 : 6.00 16139.50 63.04 0.00 0.00 0.00 0.00 0.00 00:09:19.425 =================================================================================================================== 00:09:19.425 Total : 16139.50 63.04 0.00 0.00 0.00 0.00 0.00 00:09:19.425 00:09:20.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.359 Nvme0n1 : 7.00 16270.57 63.56 0.00 0.00 0.00 0.00 0.00 00:09:20.359 =================================================================================================================== 00:09:20.359 Total : 16270.57 63.56 0.00 0.00 0.00 0.00 0.00 00:09:20.359 00:09:21.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.291 Nvme0n1 : 8.00 16348.12 63.86 0.00 0.00 0.00 0.00 0.00 00:09:21.291 =================================================================================================================== 00:09:21.291 Total : 16348.12 63.86 0.00 0.00 0.00 0.00 0.00 00:09:21.291 00:09:22.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.224 Nvme0n1 : 9.00 16225.67 63.38 0.00 0.00 0.00 0.00 0.00 00:09:22.224 =================================================================================================================== 00:09:22.224 Total : 16225.67 63.38 0.00 0.00 0.00 0.00 0.00 00:09:22.224 00:09:23.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.158 Nvme0n1 : 10.00 16114.40 62.95 0.00 0.00 0.00 0.00 0.00 00:09:23.158 =================================================================================================================== 00:09:23.158 Total : 16114.40 62.95 0.00 0.00 0.00 0.00 0.00 00:09:23.158 00:09:23.158 00:09:23.158 Latency(us) 00:09:23.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.158 Nvme0n1 : 10.00 16119.97 62.97 0.00 0.00 7936.14 2924.85 16408.27 00:09:23.158 =================================================================================================================== 00:09:23.158 Total : 16119.97 62.97 0.00 0.00 7936.14 2924.85 16408.27 00:09:23.158 0 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 435860 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 435860 ']' 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 435860 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 435860 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:23.416 09:23:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 435860' 00:09:23.416 killing process with pid 435860 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 435860 00:09:23.416 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.416 00:09:23.416 Latency(us) 00:09:23.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.416 =================================================================================================================== 00:09:23.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.416 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 435860 00:09:23.674 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.932 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.189 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:24.189 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.447 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:24.447 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:24.447 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 433220 00:09:24.447 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 433220 00:09:24.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 433220 Killed "${NVMF_APP[@]}" "$@" 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=437324 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 437324 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 437324 ']' 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.448 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.448 [2024-07-25 09:23:57.058722] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:24.448 [2024-07-25 09:23:57.058810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.448 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.448 [2024-07-25 09:23:57.123706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.706 [2024-07-25 09:23:57.232276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.706 [2024-07-25 09:23:57.232330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.706 [2024-07-25 09:23:57.232372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.706 [2024-07-25 09:23:57.232396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.706 [2024-07-25 09:23:57.232422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
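What makes this the "dirty" variant is visible in the lines above: the first nvmf_tgt (pid 433220) is killed with SIGKILL rather than shut down cleanly, a fresh target is started inside the cvl_0_0_ns_spdk namespace, and the same backing file is re-attached so blobstore recovery has to replay the lvstore metadata. Sketched with placeholder variables (the trace uses absolute jenkins paths and pid 433220):

    kill -9 "$nvmfpid"        # leave the lvstore metadata dirty on disk

    # Restart the target inside the test's network namespace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # Re-creating the AIO bdev over the same file triggers the
    # "Performing recovery on blobstore" / "Recover: blob ..." notices logged below.
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096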
00:09:24.706 [2024-07-25 09:23:57.232448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.706 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.964 [2024-07-25 09:23:57.633083] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.964 [2024-07-25 09:23:57.633218] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.964 [2024-07-25 09:23:57.633270] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 01620f9d-1957-4805-9050-0795b7c0b188 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=01620f9d-1957-4805-9050-0795b7c0b188 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:24.964 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.222 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01620f9d-1957-4805-9050-0795b7c0b188 -t 2000 00:09:25.480 [ 00:09:25.480 { 00:09:25.480 "name": "01620f9d-1957-4805-9050-0795b7c0b188", 00:09:25.480 "aliases": [ 00:09:25.480 "lvs/lvol" 00:09:25.480 ], 00:09:25.480 "product_name": "Logical Volume", 00:09:25.480 "block_size": 4096, 00:09:25.480 "num_blocks": 38912, 00:09:25.480 "uuid": "01620f9d-1957-4805-9050-0795b7c0b188", 00:09:25.480 "assigned_rate_limits": { 00:09:25.480 "rw_ios_per_sec": 0, 00:09:25.480 "rw_mbytes_per_sec": 0, 00:09:25.480 "r_mbytes_per_sec": 0, 00:09:25.480 "w_mbytes_per_sec": 0 00:09:25.480 }, 00:09:25.480 "claimed": false, 00:09:25.480 "zoned": false, 
00:09:25.480 "supported_io_types": { 00:09:25.480 "read": true, 00:09:25.480 "write": true, 00:09:25.480 "unmap": true, 00:09:25.480 "flush": false, 00:09:25.480 "reset": true, 00:09:25.480 "nvme_admin": false, 00:09:25.480 "nvme_io": false, 00:09:25.480 "nvme_io_md": false, 00:09:25.480 "write_zeroes": true, 00:09:25.480 "zcopy": false, 00:09:25.480 "get_zone_info": false, 00:09:25.480 "zone_management": false, 00:09:25.480 "zone_append": false, 00:09:25.480 "compare": false, 00:09:25.480 "compare_and_write": false, 00:09:25.480 "abort": false, 00:09:25.480 "seek_hole": true, 00:09:25.480 "seek_data": true, 00:09:25.480 "copy": false, 00:09:25.480 "nvme_iov_md": false 00:09:25.480 }, 00:09:25.480 "driver_specific": { 00:09:25.480 "lvol": { 00:09:25.480 "lvol_store_uuid": "8b89956e-e6b7-4709-b87c-0e59a15ccb1a", 00:09:25.480 "base_bdev": "aio_bdev", 00:09:25.480 "thin_provision": false, 00:09:25.480 "num_allocated_clusters": 38, 00:09:25.480 "snapshot": false, 00:09:25.480 "clone": false, 00:09:25.480 "esnap_clone": false 00:09:25.480 } 00:09:25.480 } 00:09:25.480 } 00:09:25.480 ] 00:09:25.480 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:25.480 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:25.480 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:25.738 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:25.738 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:25.738 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:26.304 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:26.304 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.304 [2024-07-25 09:23:59.014472] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.562 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:26.820 request: 00:09:26.820 { 00:09:26.820 "uuid": "8b89956e-e6b7-4709-b87c-0e59a15ccb1a", 00:09:26.820 "method": "bdev_lvol_get_lvstores", 00:09:26.820 "req_id": 1 00:09:26.820 } 00:09:26.820 Got JSON-RPC error response 00:09:26.820 response: 00:09:26.820 { 00:09:26.820 "code": -19, 00:09:26.820 "message": "No such device" 00:09:26.820 } 00:09:26.820 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:26.820 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.820 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.820 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.820 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.078 aio_bdev 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 01620f9d-1957-4805-9050-0795b7c0b188 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=01620f9d-1957-4805-9050-0795b7c0b188 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:27.078 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.336 09:23:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01620f9d-1957-4805-9050-0795b7c0b188 -t 2000 00:09:27.336 [ 00:09:27.336 { 00:09:27.336 "name": "01620f9d-1957-4805-9050-0795b7c0b188", 00:09:27.336 "aliases": [ 00:09:27.336 "lvs/lvol" 00:09:27.336 ], 00:09:27.336 "product_name": "Logical Volume", 00:09:27.336 "block_size": 4096, 00:09:27.336 "num_blocks": 38912, 00:09:27.336 "uuid": "01620f9d-1957-4805-9050-0795b7c0b188", 00:09:27.336 "assigned_rate_limits": { 00:09:27.336 "rw_ios_per_sec": 0, 00:09:27.336 "rw_mbytes_per_sec": 0, 00:09:27.336 "r_mbytes_per_sec": 0, 00:09:27.336 "w_mbytes_per_sec": 0 00:09:27.336 }, 00:09:27.336 "claimed": false, 00:09:27.336 "zoned": false, 00:09:27.336 "supported_io_types": { 00:09:27.336 "read": true, 00:09:27.336 "write": true, 00:09:27.336 "unmap": true, 00:09:27.336 "flush": false, 00:09:27.336 "reset": true, 00:09:27.336 "nvme_admin": false, 00:09:27.336 "nvme_io": false, 00:09:27.336 "nvme_io_md": false, 00:09:27.336 "write_zeroes": true, 00:09:27.336 "zcopy": false, 00:09:27.336 "get_zone_info": false, 00:09:27.336 "zone_management": false, 00:09:27.336 "zone_append": false, 00:09:27.336 "compare": false, 00:09:27.336 "compare_and_write": false, 00:09:27.336 "abort": false, 00:09:27.336 "seek_hole": true, 00:09:27.336 "seek_data": true, 00:09:27.336 "copy": false, 00:09:27.336 "nvme_iov_md": false 00:09:27.336 }, 00:09:27.336 "driver_specific": { 00:09:27.336 "lvol": { 00:09:27.336 "lvol_store_uuid": "8b89956e-e6b7-4709-b87c-0e59a15ccb1a", 00:09:27.336 "base_bdev": "aio_bdev", 00:09:27.336 "thin_provision": false, 00:09:27.336 "num_allocated_clusters": 38, 00:09:27.336 "snapshot": false, 00:09:27.336 "clone": false, 00:09:27.336 "esnap_clone": false 00:09:27.336 } 00:09:27.336 } 00:09:27.336 } 00:09:27.336 ] 00:09:27.594 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:27.594 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:27.594 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:27.852 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:27.852 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 00:09:27.852 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:27.852 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:27.852 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01620f9d-1957-4805-9050-0795b7c0b188 00:09:28.418 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b89956e-e6b7-4709-b87c-0e59a15ccb1a 
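The pass/fail decision in this phase is a plain comparison of the cluster counts reported by the recovered lvstore: with 4 MiB clusters the grown 400M file yields 99 data clusters, the thick-provisioned 150M lvol occupies 38 of them, so 99 - 38 = 61 clusters must still be free. A compact sketch of that check (lvstore UUID copied from the trace, rpc.py assumed to be on PATH):

    lvs=8b89956e-e6b7-4709-b87c-0e59a15ccb1a
    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')

    # 99 total data clusters after the grow, 38 allocated by the 150M lvol -> 61 free.
    (( free == 61 && total == 99 )) || echo "lvstore did not grow as expected" >&2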
00:09:28.418 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.676 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.676 00:09:28.676 real 0m19.367s 00:09:28.676 user 0m49.192s 00:09:28.676 sys 0m5.088s 00:09:28.676 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.676 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.676 ************************************ 00:09:28.676 END TEST lvs_grow_dirty 00:09:28.676 ************************************ 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:28.933 nvmf_trace.0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.933 rmmod nvme_tcp 00:09:28.933 rmmod nvme_fabrics 00:09:28.933 rmmod nvme_keyring 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 437324 ']' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 437324 00:09:28.933 
09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 437324 ']' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 437324 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 437324 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 437324' 00:09:28.933 killing process with pid 437324 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 437324 00:09:28.933 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 437324 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.190 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:31.719 00:09:31.719 real 0m43.080s 00:09:31.719 user 1m12.345s 00:09:31.719 sys 0m8.937s 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.719 ************************************ 00:09:31.719 END TEST nvmf_lvs_grow 00:09:31.719 ************************************ 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.719 ************************************ 00:09:31.719 START TEST nvmf_bdev_io_wait 00:09:31.719 ************************************ 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:31.719 * Looking for test storage... 00:09:31.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.719 
09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.719 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:33.617 09:24:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:33.617 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:33.617 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:33.618 Found net devices under 0000:82:00.0: cvl_0_0 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:33.618 Found net devices under 0000:82:00.1: cvl_0_1 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.618 09:24:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.618 09:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:09:33.618 00:09:33.618 --- 10.0.0.2 ping statistics --- 00:09:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.618 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:33.618 00:09:33.618 --- 10.0.0.1 ping statistics --- 00:09:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.618 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=439962 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:33.618 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 439962 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 439962 ']' 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.619 [2024-07-25 09:24:06.131394] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:33.619 [2024-07-25 09:24:06.131486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.619 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.619 [2024-07-25 09:24:06.199926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.619 [2024-07-25 09:24:06.312908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.619 [2024-07-25 09:24:06.312966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.619 [2024-07-25 09:24:06.312980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.619 [2024-07-25 09:24:06.313007] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.619 [2024-07-25 09:24:06.313017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.619 [2024-07-25 09:24:06.313095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.619 [2024-07-25 09:24:06.313118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.619 [2024-07-25 09:24:06.313174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.619 [2024-07-25 09:24:06.313177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.619 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 [2024-07-25 09:24:06.454257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 Malloc0 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 [2024-07-25 09:24:06.514254] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=439990 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=439992 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.877 { 00:09:33.877 "params": { 00:09:33.877 "name": "Nvme$subsystem", 00:09:33.877 "trtype": "$TEST_TRANSPORT", 00:09:33.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.877 "adrfam": "ipv4", 00:09:33.877 "trsvcid": "$NVMF_PORT", 00:09:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.877 "hdgst": ${hdgst:-false}, 00:09:33.877 "ddgst": ${ddgst:-false} 00:09:33.877 }, 00:09:33.877 "method": "bdev_nvme_attach_controller" 00:09:33.877 } 00:09:33.877 EOF 00:09:33.877 )") 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=439994 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.877 { 00:09:33.877 "params": { 00:09:33.877 "name": "Nvme$subsystem", 00:09:33.877 "trtype": "$TEST_TRANSPORT", 00:09:33.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.877 "adrfam": "ipv4", 00:09:33.877 "trsvcid": "$NVMF_PORT", 00:09:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.877 "hdgst": ${hdgst:-false}, 00:09:33.877 "ddgst": ${ddgst:-false} 00:09:33.877 }, 00:09:33.877 "method": "bdev_nvme_attach_controller" 00:09:33.877 } 00:09:33.877 EOF 00:09:33.877 )") 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=439997 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.877 { 00:09:33.877 "params": { 00:09:33.877 "name": "Nvme$subsystem", 00:09:33.877 "trtype": "$TEST_TRANSPORT", 00:09:33.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.877 "adrfam": "ipv4", 00:09:33.877 "trsvcid": "$NVMF_PORT", 00:09:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.877 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:33.877 "hdgst": ${hdgst:-false}, 00:09:33.877 "ddgst": ${ddgst:-false} 00:09:33.877 }, 00:09:33.877 "method": "bdev_nvme_attach_controller" 00:09:33.877 } 00:09:33.877 EOF 00:09:33.877 )") 00:09:33.877 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.878 { 00:09:33.878 "params": { 00:09:33.878 "name": "Nvme$subsystem", 00:09:33.878 "trtype": "$TEST_TRANSPORT", 00:09:33.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.878 "adrfam": "ipv4", 00:09:33.878 "trsvcid": "$NVMF_PORT", 00:09:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.878 "hdgst": ${hdgst:-false}, 00:09:33.878 "ddgst": ${ddgst:-false} 00:09:33.878 }, 00:09:33.878 "method": "bdev_nvme_attach_controller" 00:09:33.878 } 00:09:33.878 EOF 00:09:33.878 )") 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 439990 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.878 "params": { 00:09:33.878 "name": "Nvme1", 00:09:33.878 "trtype": "tcp", 00:09:33.878 "traddr": "10.0.0.2", 00:09:33.878 "adrfam": "ipv4", 00:09:33.878 "trsvcid": "4420", 00:09:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.878 "hdgst": false, 00:09:33.878 "ddgst": false 00:09:33.878 }, 00:09:33.878 "method": "bdev_nvme_attach_controller" 00:09:33.878 }' 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.878 "params": { 00:09:33.878 "name": "Nvme1", 00:09:33.878 "trtype": "tcp", 00:09:33.878 "traddr": "10.0.0.2", 00:09:33.878 "adrfam": "ipv4", 00:09:33.878 "trsvcid": "4420", 00:09:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.878 "hdgst": false, 00:09:33.878 "ddgst": false 00:09:33.878 }, 00:09:33.878 "method": "bdev_nvme_attach_controller" 00:09:33.878 }' 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.878 "params": { 00:09:33.878 "name": "Nvme1", 00:09:33.878 "trtype": "tcp", 00:09:33.878 "traddr": "10.0.0.2", 00:09:33.878 "adrfam": "ipv4", 00:09:33.878 "trsvcid": "4420", 00:09:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.878 "hdgst": false, 00:09:33.878 "ddgst": false 00:09:33.878 }, 00:09:33.878 "method": "bdev_nvme_attach_controller" 00:09:33.878 }' 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.878 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.878 "params": { 00:09:33.878 "name": "Nvme1", 00:09:33.878 "trtype": "tcp", 00:09:33.878 "traddr": "10.0.0.2", 00:09:33.878 "adrfam": "ipv4", 00:09:33.878 "trsvcid": "4420", 00:09:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.878 "hdgst": false, 00:09:33.878 "ddgst": false 00:09:33.878 }, 00:09:33.878 "method": "bdev_nvme_attach_controller" 00:09:33.878 }' 00:09:33.878 [2024-07-25 09:24:06.561652] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:33.878 [2024-07-25 09:24:06.561652] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:33.878 [2024-07-25 09:24:06.561652] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:33.878 [2024-07-25 09:24:06.561651] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:33.878 [2024-07-25 09:24:06.561749] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 09:24:06.561748] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 09:24:06.561749] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 09:24:06.561749] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:33.878 --proc-type=auto ] 00:09:33.878 --proc-type=auto ] 00:09:33.878 --proc-type=auto ] 00:09:33.878 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.136 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.136 [2024-07-25 09:24:06.733230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.136 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.136 [2024-07-25 09:24:06.833641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.136 [2024-07-25 09:24:06.838235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.393 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.393 [2024-07-25 09:24:06.935652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:34.393 [2024-07-25 09:24:06.939433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.393 [2024-07-25 09:24:07.039080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:34.393 [2024-07-25 09:24:07.042539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.651 [2024-07-25 09:24:07.143480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:34.651 Running I/O for 1 seconds... 00:09:34.651 Running I/O for 1 seconds... 00:09:34.651 Running I/O for 1 seconds... 00:09:34.651 Running I/O for 1 seconds... 
00:09:35.585 00:09:35.585 Latency(us) 00:09:35.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.585 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:35.585 Nvme1n1 : 1.01 10399.15 40.62 0.00 0.00 12260.99 6941.96 20000.62 00:09:35.585 =================================================================================================================== 00:09:35.585 Total : 10399.15 40.62 0.00 0.00 12260.99 6941.96 20000.62 00:09:35.585 00:09:35.585 Latency(us) 00:09:35.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.585 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:35.585 Nvme1n1 : 1.01 8253.29 32.24 0.00 0.00 15425.43 9806.13 25826.04 00:09:35.585 =================================================================================================================== 00:09:35.585 Total : 8253.29 32.24 0.00 0.00 15425.43 9806.13 25826.04 00:09:35.842 00:09:35.842 Latency(us) 00:09:35.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.842 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:35.842 Nvme1n1 : 1.01 9460.93 36.96 0.00 0.00 13475.78 6602.15 26214.40 00:09:35.842 =================================================================================================================== 00:09:35.842 Total : 9460.93 36.96 0.00 0.00 13475.78 6602.15 26214.40 00:09:35.842 00:09:35.842 Latency(us) 00:09:35.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.842 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:35.842 Nvme1n1 : 1.00 198877.97 776.87 0.00 0.00 641.15 268.52 849.54 00:09:35.842 =================================================================================================================== 00:09:35.842 Total : 198877.97 776.87 0.00 0.00 641.15 268.52 849.54 00:09:35.842 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 439992 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 439994 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 439997 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
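The four latency tables above come from four bdevperf instances run in parallel against the same subsystem, one per workload (write, read, flush, unmap), each pinned to its own core mask and shared-memory id and fed its controller configuration as JSON through a process-substitution file descriptor (the --json /dev/fd/63 seen in the trace). A minimal sketch of that launch pattern follows; gen_target_json here stands in for the harness's gen_nvmf_target_json helper, and its outer JSON wrapper is abbreviated from what that helper actually emits.

#!/usr/bin/env bash
# Minimal sketch of the parallel bdevperf launch traced above. Assumes the
# NVMe-oF/TCP target is already listening on 10.0.0.2:4420 for cnode1.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

gen_target_json() {
    # Stand-in for gen_nvmf_target_json; wrapper abbreviated, params from the trace.
    cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}

# One instance per workload, each on its own core mask (-m) and instance id (-i),
# reading its config through process substitution (--json /dev/fd/NN).
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"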
00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.100 rmmod nvme_tcp 00:09:36.100 rmmod nvme_fabrics 00:09:36.100 rmmod nvme_keyring 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 439962 ']' 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 439962 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 439962 ']' 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 439962 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 439962 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 439962' 00:09:36.100 killing process with pid 439962 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 439962 00:09:36.100 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 439962 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.358 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.887 00:09:38.887 real 0m7.196s 00:09:38.887 user 0m16.916s 00:09:38.887 sys 0m3.513s 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 ************************************ 00:09:38.887 END TEST nvmf_bdev_io_wait 
00:09:38.887 ************************************ 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 ************************************ 00:09:38.887 START TEST nvmf_queue_depth 00:09:38.887 ************************************ 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.887 * Looking for test storage... 00:09:38.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.887 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.888 09:24:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@296 -- # e810=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.785 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:40.786 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:40.786 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:40.786 Found net devices under 0000:82:00.0: cvl_0_0 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:40.786 Found net devices under 0000:82:00.1: cvl_0_1 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:09:40.786 00:09:40.786 --- 10.0.0.2 ping statistics --- 00:09:40.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.786 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:09:40.786 00:09:40.786 --- 10.0.0.1 ping statistics --- 00:09:40.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.786 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=442729 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 442729 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 442729 ']' 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.786 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.787 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
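For anyone replaying this setup by hand: the nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained initiator/target pair by moving one port into a private network namespace. A condensed sketch of those commands follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this rig, and root is required.

  #!/usr/bin/env bash
  # Condensed from the nvmf_tcp_init trace above: cvl_0_1 stays in the default
  # namespace as the initiator, cvl_0_0 becomes the target inside its own netns.
  set -euo pipefail
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0          # start from clean interfaces
  ip -4 addr flush cvl_0_1

  ip netns add "$NS"                # isolate the target-side port
  ip link set cvl_0_0 netns "$NS"

  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # let NVMe/TCP traffic (port 4420) in from the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator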
00:09:40.787 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.787 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.787 [2024-07-25 09:24:13.286480] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:40.787 [2024-07-25 09:24:13.286582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.787 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.787 [2024-07-25 09:24:13.351685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.787 [2024-07-25 09:24:13.461854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.787 [2024-07-25 09:24:13.461921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.787 [2024-07-25 09:24:13.461950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.787 [2024-07-25 09:24:13.461961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.787 [2024-07-25 09:24:13.461971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.787 [2024-07-25 09:24:13.462002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-07-25 09:24:13.607317] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 Malloc0 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-07-25 09:24:13.675029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=442868 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 442868 /var/tmp/bdevperf.sock 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 442868 ']' 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.044 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-07-25 09:24:13.724446] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
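The queue_depth test itself is a short RPC sequence against that namespaced target plus a bdevperf instance on the initiator side, as traced above. A hedged sketch of the flow, with SPDK_DIR standing in for the workspace-specific /var/jenkins/... checkout used in this run:

  #!/usr/bin/env bash
  # Sketch of the target/queue_depth.sh flow traced above; SPDK_DIR is a placeholder.
  # The real harness polls each RPC socket with waitforlisten; sleeps stand in here.
  set -euo pipefail
  SPDK_DIR=/path/to/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"

  # Target application pinned to core 1 (-m 0x2), run inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  sleep 2

  "$rpc" nvmf_create_transport -t tcp -o -u 8192   # TCP transport opts from nvmf/common.sh
  "$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf issuing 4 KiB verify I/O at queue depth 1024 for 10 s.
  "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  sleep 2
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

Further down in the trace, that 10-second run ends at roughly 8585 IOPS of 4 KiB verify I/O, i.e. about 33.5 MiB/s (8584.88 x 4096 bytes), which is the Total line bdevperf reports before the processes are killed and the namespace is torn down.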
00:09:41.044 [2024-07-25 09:24:13.724518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442868 ] 00:09:41.044 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.302 [2024-07-25 09:24:13.791222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.302 [2024-07-25 09:24:13.907029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.302 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.302 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:41.302 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:41.302 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.302 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.559 NVMe0n1 00:09:41.559 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.559 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:41.817 Running I/O for 10 seconds... 00:09:51.790 00:09:51.790 Latency(us) 00:09:51.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.790 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:51.790 Verification LBA range: start 0x0 length 0x4000 00:09:51.790 NVMe0n1 : 10.12 8584.88 33.53 0.00 0.00 118355.59 21748.24 86216.25 00:09:51.790 =================================================================================================================== 00:09:51.790 Total : 8584.88 33.53 0.00 0.00 118355.59 21748.24 86216.25 00:09:51.790 0 00:09:51.790 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 442868 00:09:51.790 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 442868 ']' 00:09:51.790 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 442868 00:09:51.790 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:51.790 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.790 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 442868 00:09:52.048 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:52.048 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:52.048 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 442868' 00:09:52.048 killing process with pid 442868 00:09:52.048 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 442868 00:09:52.048 Received shutdown signal, 
test time was about 10.000000 seconds 00:09:52.048 00:09:52.048 Latency(us) 00:09:52.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.048 =================================================================================================================== 00:09:52.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.048 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 442868 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.307 rmmod nvme_tcp 00:09:52.307 rmmod nvme_fabrics 00:09:52.307 rmmod nvme_keyring 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 442729 ']' 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 442729 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 442729 ']' 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 442729 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 442729 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 442729' 00:09:52.307 killing process with pid 442729 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 442729 00:09:52.307 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 442729 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.566 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:55.099 00:09:55.099 real 0m16.100s 00:09:55.099 user 0m22.585s 00:09:55.099 sys 0m3.257s 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.099 ************************************ 00:09:55.099 END TEST nvmf_queue_depth 00:09:55.099 ************************************ 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.099 ************************************ 00:09:55.099 START TEST nvmf_target_multipath 00:09:55.099 ************************************ 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.099 * Looking for test storage... 
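The nvmf_target_multipath test that starts here is the one in this batch that cannot run on a single NIC pair: nvmf_tcp_init leaves the second target IP empty (common.sh@240 above, presumably NVMF_SECOND_TARGET_IP), so the guard at the top of target/multipath.sh prints 'only one NIC for nvmf test' and exits 0, as the trace below shows. Paraphrased from multipath.sh@45-48 in that trace:

  # Guard at the start of target/multipath.sh (paraphrased from the trace below)
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini
      exit 0
  fi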
00:09:55.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.099 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.100 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:57.002 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:57.002 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:57.002 Found net devices under 0000:82:00.0: cvl_0_0 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.002 09:24:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:57.002 Found net devices under 0000:82:00.1: cvl_0_1 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.002 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:57.003 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:09:57.003 00:09:57.003 --- 10.0.0.2 ping statistics --- 00:09:57.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.003 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:09:57.003 00:09:57.003 --- 10.0.0.1 ping statistics --- 00:09:57.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.003 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:57.003 only one NIC for nvmf test 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.003 rmmod nvme_tcp 00:09:57.003 rmmod nvme_fabrics 00:09:57.003 rmmod nvme_keyring 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.003 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.903 00:09:58.903 real 0m4.278s 
00:09:58.903 user 0m0.800s 00:09:58.903 sys 0m1.470s 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:58.903 ************************************ 00:09:58.903 END TEST nvmf_target_multipath 00:09:58.903 ************************************ 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.903 ************************************ 00:09:58.903 START TEST nvmf_zcopy 00:09:58.903 ************************************ 00:09:58.903 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.161 * Looking for test storage... 00:09:59.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.161 09:24:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.161 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.162 09:24:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.162 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.062 09:24:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.062 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:01.062 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:01.063 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:01.063 Found net devices under 0000:82:00.0: cvl_0_0 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:01.063 Found net devices under 0000:82:00.1: cvl_0_1 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.063 09:24:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:01.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:10:01.063 00:10:01.063 --- 10.0.0.2 ping statistics --- 00:10:01.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.063 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:10:01.063 00:10:01.063 --- 10.0.0.1 ping statistics --- 00:10:01.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.063 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=447940 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 447940 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 447940 ']' 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.063 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.321 [2024-07-25 09:24:33.822398] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
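The nvmf_tcp_init block above pins the test topology before the target starts: the first e810 port (cvl_0_0) is moved into a private network namespace and serves as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of those commands, copied from the trace above (illustrative only, not additional captured output):
  # target-side port gets its own namespace; the initiator stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0 inside the namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on the listener port from the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # nvmf_tgt is then launched inside the namespace on core 1 (mask 0x2), as recorded above
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2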
00:10:01.322 [2024-07-25 09:24:33.822468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.322 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.322 [2024-07-25 09:24:33.883462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.322 [2024-07-25 09:24:33.994339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.322 [2024-07-25 09:24:33.994429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.322 [2024-07-25 09:24:33.994458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.322 [2024-07-25 09:24:33.994471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.322 [2024-07-25 09:24:33.994481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.322 [2024-07-25 09:24:33.994523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 [2024-07-25 09:24:34.140266] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 [2024-07-25 09:24:34.156490] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 malloc0 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:01.580 { 00:10:01.580 "params": { 00:10:01.580 "name": "Nvme$subsystem", 00:10:01.580 "trtype": "$TEST_TRANSPORT", 00:10:01.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.580 "adrfam": "ipv4", 00:10:01.580 "trsvcid": "$NVMF_PORT", 00:10:01.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.580 "hdgst": ${hdgst:-false}, 00:10:01.580 "ddgst": ${ddgst:-false} 00:10:01.580 }, 00:10:01.580 "method": "bdev_nvme_attach_controller" 00:10:01.580 } 00:10:01.580 EOF 00:10:01.580 )") 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
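Stripped of the xtrace noise, the target provisioning that zcopy.sh performs in the lines above boils down to six RPC calls; the flag values below are the ones recorded in this run (a sketch for readability, not additional captured output):
  # TCP transport with zero-copy enabled (--zcopy); -o and -c 0 are the test's transport options
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem cnode1: any host allowed (-a), fixed serial number, at most 10 namespaces (-m 10)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with a 4096-byte block size, exported as namespace 1
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1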
00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:01.580 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:01.580 "params": { 00:10:01.580 "name": "Nvme1", 00:10:01.580 "trtype": "tcp", 00:10:01.580 "traddr": "10.0.0.2", 00:10:01.581 "adrfam": "ipv4", 00:10:01.581 "trsvcid": "4420", 00:10:01.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.581 "hdgst": false, 00:10:01.581 "ddgst": false 00:10:01.581 }, 00:10:01.581 "method": "bdev_nvme_attach_controller" 00:10:01.581 }' 00:10:01.581 [2024-07-25 09:24:34.252433] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:10:01.581 [2024-07-25 09:24:34.252517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448083 ] 00:10:01.581 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.838 [2024-07-25 09:24:34.321001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.838 [2024-07-25 09:24:34.439420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.097 Running I/O for 10 seconds... 00:10:12.067 00:10:12.067 Latency(us) 00:10:12.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.067 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:12.067 Verification LBA range: start 0x0 length 0x1000 00:10:12.067 Nvme1n1 : 10.02 5652.82 44.16 0.00 0.00 22581.90 3349.62 32622.36 00:10:12.067 =================================================================================================================== 00:10:12.067 Total : 5652.82 44.16 0.00 0.00 22581.90 3349.62 32622.36 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=449279 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:12.325 { 00:10:12.325 "params": { 00:10:12.325 "name": "Nvme$subsystem", 00:10:12.325 "trtype": "$TEST_TRANSPORT", 00:10:12.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.325 "adrfam": "ipv4", 00:10:12.325 "trsvcid": "$NVMF_PORT", 00:10:12.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.325 "hdgst": ${hdgst:-false}, 00:10:12.325 "ddgst": ${ddgst:-false} 00:10:12.325 }, 00:10:12.325 "method": "bdev_nvme_attach_controller" 00:10:12.325 } 00:10:12.325 EOF 00:10:12.325 )") 00:10:12.325 09:24:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:12.325 [2024-07-25 09:24:44.991595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:44.991653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:12.325 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:12.325 "params": { 00:10:12.325 "name": "Nvme1", 00:10:12.325 "trtype": "tcp", 00:10:12.325 "traddr": "10.0.0.2", 00:10:12.325 "adrfam": "ipv4", 00:10:12.325 "trsvcid": "4420", 00:10:12.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.325 "hdgst": false, 00:10:12.325 "ddgst": false 00:10:12.325 }, 00:10:12.325 "method": "bdev_nvme_attach_controller" 00:10:12.325 }' 00:10:12.325 [2024-07-25 09:24:44.999554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:44.999579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.007579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.007603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.015592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.015615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.023613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.023650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.027948] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
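The --json /dev/fd/62 and /dev/fd/63 arguments in the two bdevperf invocations are bash process substitutions: the same script line that traces gen_nvmf_target_json also traces the bdevperf command, and the generated JSON printed above is what reaches that fd. The second run therefore presumably corresponds to something like the following (a reconstruction from the trace, not a verbatim script excerpt):
  # 5-second 50/50 random read/write run, queue depth 128, 8 KiB I/O,
  # against the Nvme1 controller described by the generated JSON
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192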
00:10:12.325 [2024-07-25 09:24:45.028019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449279 ] 00:10:12.325 [2024-07-25 09:24:45.031650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.031671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.039675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.039697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.047705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.047725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 [2024-07-25 09:24:45.055725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.325 [2024-07-25 09:24:45.055746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.325 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.583 [2024-07-25 09:24:45.063774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.583 [2024-07-25 09:24:45.063806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.583 [2024-07-25 09:24:45.071790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.583 [2024-07-25 09:24:45.071817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.583 [2024-07-25 09:24:45.079808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.583 [2024-07-25 09:24:45.079834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.583 [2024-07-25 09:24:45.087827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.583 [2024-07-25 09:24:45.087853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.583 [2024-07-25 09:24:45.092298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.583 [2024-07-25 09:24:45.095853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.583 [2024-07-25 09:24:45.095879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.583 [2024-07-25 09:24:45.103903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.103941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.111896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.111923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.119915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.119941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.127936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.127961] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.135963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.135988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.143982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.144007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.152003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.152028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.160049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.160083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.168070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.168105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.176071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.176097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.184091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.184117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.192115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.192140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.200137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.200162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.208163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.208189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.214327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.584 [2024-07-25 09:24:45.216181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.216206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.224202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.224227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.232250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.232284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.240276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.240313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.248297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.248335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.256326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.256374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.264347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.264405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.272382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.272431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.280425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.280467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.288399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.288440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.296456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.296491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.304479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.304539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.584 [2024-07-25 09:24:45.312472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.584 [2024-07-25 09:24:45.312500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.320514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.320544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.328495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.328520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.336512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.336534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.344560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.344585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.352572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.352597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.360585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:12.842 [2024-07-25 09:24:45.360609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.368606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.368630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.376630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.376667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.384673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.384699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.392692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.392730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.400726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.400751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.408739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.408762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.416773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.416800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.424801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.424829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.432817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.432843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.440847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.440878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.448870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.448898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 Running I/O for 5 seconds... 
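Each repeated pair of errors above ("Requested NSID 1 already in use" followed by "Unable to add namespace") is one iteration of a loop in zcopy.sh that keeps re-issuing the namespace-add RPC while bdevperf holds I/O in flight; namespace 1 already exists, so every attempt is expected to fail, and the apparent purpose is to exercise the subsystem pause/resume path (nvmf_rpc_ns_paused) with outstanding zero-copy requests. Each iteration is effectively just the call below (a sketch; the loop bounds are not visible in this excerpt):
  # expected to fail: NSID 1 was already attached during setup, so the RPC
  # only pauses and resumes the subsystem under active zcopy I/O
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1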
00:10:12.842 [2024-07-25 09:24:45.456889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.456915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.472451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.472478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.483677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.483719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.495583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.495608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.507166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.507197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.518619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.518661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.530590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.530618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.542894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.542925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.553994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.554025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.565274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.565311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.842 [2024-07-25 09:24:45.576893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.842 [2024-07-25 09:24:45.576933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.588848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.588880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.601944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.601975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.612098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.612129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.623841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 
[2024-07-25 09:24:45.623873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.635477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.635503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.649044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.649074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.660581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.660607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.673887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.673918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.685085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.685116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.696665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.696690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.707942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.707978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.719075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.719105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.729933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.729963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.743089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.743120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.752906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.752937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.764755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.764785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.776085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.776115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.789498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.789531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.800034] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.800065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.811749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.811775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.100 [2024-07-25 09:24:45.823780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.100 [2024-07-25 09:24:45.823810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.836079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.836110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.847967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.847998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.859480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.859506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.870371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.870414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.881537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.881563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.893600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.893626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.906105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.906136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.918058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.918088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.930135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.930169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.941826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.941857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.954061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.954092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.965869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.965900] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.979912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.979943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:45.991330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:45.991371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.003484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.003510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.014950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.014993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.026599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.026626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.038514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.038545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.050057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.050088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.061582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.061608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.072862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.072893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.358 [2024-07-25 09:24:46.084272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.358 [2024-07-25 09:24:46.084303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.616 [2024-07-25 09:24:46.096715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.616 [2024-07-25 09:24:46.096747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.616 [2024-07-25 09:24:46.108196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.616 [2024-07-25 09:24:46.108227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.616 [2024-07-25 09:24:46.120014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.616 [2024-07-25 09:24:46.120046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.131823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.131854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.143519] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.143545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.155092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.155123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.168607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.168634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.180098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.180129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.191821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.191851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.203127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.203158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.214870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.214901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.227316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.227346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.241010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.241041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.252565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.252591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.264266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.264296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.276320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.276351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.288030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.288060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.299348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.299404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.310938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.310969] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.322568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.322593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.334380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.334422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.617 [2024-07-25 09:24:46.346478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.617 [2024-07-25 09:24:46.346504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.358852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.358883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.370484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.370510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.382197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.382227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.393558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.393584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.405446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.405472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.416819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.416849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.428382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.428424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.440540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.440567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.452405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.452430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.463919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.463950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.475578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.475614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.487089] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.487121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.498485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.498514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.509969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.510000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.521352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.521405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.532946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.532977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.543977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.544008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.555719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.555750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.567657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.567683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.579469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.579495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.593002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.593033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.876 [2024-07-25 09:24:46.603975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.876 [2024-07-25 09:24:46.604007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.616184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.616216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.627365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.627408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.638919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.638949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.650477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.650503] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.661887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.661917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.673656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.673680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.685670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.685695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.698326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.698365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.709873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.709903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.721579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.721605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.732613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.732657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.744134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.744165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.756182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.756213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.767881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.767911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.779772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.779803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.791509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.791535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.803398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.803424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.814173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.814199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.824786] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.824811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.837574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.837601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.847655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.847680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.858035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.858060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.135 [2024-07-25 09:24:46.869000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.135 [2024-07-25 09:24:46.869025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.879871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.879896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.891686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.891726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.901944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.901968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.912517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.912544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.925058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.925082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.935103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.935128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.945390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.945416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.956100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.956125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.966143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.966167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.976315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.976362] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:46.986593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:46.986620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.000130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.000155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.010407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.010433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.020889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.020915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.031271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.031296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.041520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.041546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.053447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.053474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.065665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.065690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.077417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.077443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.089074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.089104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.100690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.100740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.112745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.112771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.394 [2024-07-25 09:24:47.124540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.394 [2024-07-25 09:24:47.124575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.136601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.136628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.148346] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.148401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.159913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.159943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.172031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.172062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.183672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.183713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.195706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.195736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.207745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.207776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.219413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.219439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.231019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.231049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.242695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.242739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.254220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.254251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.266443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.266470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.278917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.278948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.290882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.290913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.304625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.304665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.315793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.315823] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.327463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.327498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.338729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.338759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.350375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.350424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.362067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.653 [2024-07-25 09:24:47.362097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.653 [2024-07-25 09:24:47.375673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.654 [2024-07-25 09:24:47.375714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.654 [2024-07-25 09:24:47.387486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.654 [2024-07-25 09:24:47.387513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.399287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.399318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.411068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.411099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.422354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.422408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.433841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.433872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.445680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.445721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.456754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.456785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.468505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.468531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.480714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.480738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.492273] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.492304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.504029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.504061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.518140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.518171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.529422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.529448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.541331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.541371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.553006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.553045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.565203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.565233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.577097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.577127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.588877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.588908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.600450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.600476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.611979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.612011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.623377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.623420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.634959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.634990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.913 [2024-07-25 09:24:47.647326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.913 [2024-07-25 09:24:47.647368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.659309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.659341] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.670896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.670927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.682501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.682527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.694262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.694292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.707135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.707167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.717273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.717304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.729621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.729664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.741691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.741722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.754954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.172 [2024-07-25 09:24:47.754990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.172 [2024-07-25 09:24:47.765469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.765495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.777017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.777057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.788400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.788426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.801889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.801920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.812856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.812886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.824173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.824211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.836018] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.836048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.847761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.847805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.859592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.859618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.871583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.871609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.883304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.883334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.894818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.894849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.173 [2024-07-25 09:24:47.907006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.173 [2024-07-25 09:24:47.907040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.919083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.431 [2024-07-25 09:24:47.919115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.931190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.431 [2024-07-25 09:24:47.931221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.943449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.431 [2024-07-25 09:24:47.943475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.956933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.431 [2024-07-25 09:24:47.956964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.968530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.431 [2024-07-25 09:24:47.968556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.982675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.431 [2024-07-25 09:24:47.982716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.431 [2024-07-25 09:24:47.994222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:47.994252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.005848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.005879] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.017259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.017290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.029469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.029496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.041075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.041106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.052208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.052240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.063741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.063774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.075557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.075584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.087285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.087315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.101076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.101107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.112194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.112225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.123586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.123611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.134732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.134762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.146280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.146311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.432 [2024-07-25 09:24:48.159969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.432 [2024-07-25 09:24:48.159999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.690 [2024-07-25 09:24:48.171738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.690 [2024-07-25 09:24:48.171769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.690 [2024-07-25 09:24:48.184551] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.690 [2024-07-25 09:24:48.184577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.194799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.194829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.206573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.206600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.218153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.218183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.229732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.229762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.240807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.240838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.254460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.254486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.265552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.265578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.277328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.277368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.289050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.289081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.302168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.302200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.312951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.312981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.324416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.324442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.335693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.335735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.347282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.347312] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.358726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.358757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.370939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.370970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.383096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.383127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.395127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.395158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.406464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.406490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.691 [2024-07-25 09:24:48.420107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.691 [2024-07-25 09:24:48.420138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.949 [2024-07-25 09:24:48.431648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.431675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.443291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.443322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.455050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.455081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.468162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.468192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.478493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.478519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.490145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.490175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.501777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.501808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.513524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.513551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.525000] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.525030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.536280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.536311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.547998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.548028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.559627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.559667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.571906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.571936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.585595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.585620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.596375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.596416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.608208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.608239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.620072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.620115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.631753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.631793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.643464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.643490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.655628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.655667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.667571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.667597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.950 [2024-07-25 09:24:48.679133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.950 [2024-07-25 09:24:48.679163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.208 [2024-07-25 09:24:48.691462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.691490] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.703332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.703374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.715528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.715555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.727547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.727573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.739400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.739425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.751351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.751406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.762871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.762902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.774446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.774473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.785878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.785909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.797499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.797525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.809482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.809508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.821055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.821086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.832788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.832819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.844571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.844598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.856612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.856653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.868456] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.868483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.880334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.880375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.891611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.891658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.903098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.903129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.914245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.914275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.926631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.926671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.209 [2024-07-25 09:24:48.939128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.209 [2024-07-25 09:24:48.939158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.467 [2024-07-25 09:24:48.951453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.467 [2024-07-25 09:24:48.951479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.467 [2024-07-25 09:24:48.963012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.467 [2024-07-25 09:24:48.963042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.467 [2024-07-25 09:24:48.976292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:48.976322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:48.987083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:48.987113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:48.998714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:48.998739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.009876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.009906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.021370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.021412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.032604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.032630] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.044051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.044081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.055756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.055788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.067553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.067579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.079272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.079302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.091805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.091836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.103611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.103651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.115326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.115374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.127370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.127413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.139113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.139144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.150850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.150880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.162418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.162444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.174223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.174253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.185514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.185540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.468 [2024-07-25 09:24:49.197321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.468 [2024-07-25 09:24:49.197351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.209672] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.209698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.221364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.221409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.232519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.232545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.244503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.244529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.256787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.256818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.268093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.268124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.279336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.279376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.291065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.291096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.302477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.302503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.313807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.313837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.325259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.325290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.336821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.336863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.349050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.349082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.360848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.360879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.372034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.372059] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.383613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.383657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.395819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.395850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.407659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.407685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.419028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.419058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.430475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.430501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.442446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.442472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.726 [2024-07-25 09:24:49.454093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.726 [2024-07-25 09:24:49.454123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.466828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.466859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.478687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.478732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.490568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.490595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.502987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.503018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.514482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.514509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.526677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.526719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.538441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.538468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.550106] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.550136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.561956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.561993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.573838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.573868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.585878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.585908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.597601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.597631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.609561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.609589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.621649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.621680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.633762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.633794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.645693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.645738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.659512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.659540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.670786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.670816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.682928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.682958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.694438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.694465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.706418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.706444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.985 [2024-07-25 09:24:49.718484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.985 [2024-07-25 09:24:49.718512] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.730635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.730675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.742511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.742537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.754401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.754426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.766043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.766073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.779540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.779566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.790927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.790957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.803340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.803381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.814875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.814905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.826720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.826751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.244 [2024-07-25 09:24:49.840407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.244 [2024-07-25 09:24:49.840433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.852093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.852132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.863762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.863792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.876120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.876150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.888233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.888263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.900256] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.900287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.911756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.911787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.923312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.923343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.934992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.935023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.947113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.947143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.958611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.958652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.245 [2024-07-25 09:24:49.970317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.245 [2024-07-25 09:24:49.970348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:49.983062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:49.983093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:49.996797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:49.996828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.008475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.008505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.020270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.020301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.031949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.031979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.043620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.043667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.055287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.055317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.066681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.066709] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.078945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.078976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.094791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.094822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.105812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.105842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.117778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.117808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.129582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.129608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.141266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.141297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.153558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.153584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.165589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.165615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.177719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.177749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.189051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.189081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.200433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.200460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.212041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.212071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.223244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.223275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-07-25 09:24:50.234762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-07-25 09:24:50.234793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.761 [2024-07-25 09:24:50.247099] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.247130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.258974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.259004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.270671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.270695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.282415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.282441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.294060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.294091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.307585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.307613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.319230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.319260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.330990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.331020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.342794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.342824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.354924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.354956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.367485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.367513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.379517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.379544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.391457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.391483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.403407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.403452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.416862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.416892] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.428028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.428058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.439454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.439481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.450895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.450925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.462416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.462444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.472893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.472923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 00:10:17.762 Latency(us) 00:10:17.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.762 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:17.762 Nvme1n1 : 5.01 10895.59 85.12 0.00 0.00 11731.27 5000.15 22622.06 00:10:17.762 =================================================================================================================== 00:10:17.762 Total : 10895.59 85.12 0.00 0.00 11731.27 5000.15 22622.06 00:10:17.762 [2024-07-25 09:24:50.477787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.477816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.485817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.485846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-07-25 09:24:50.493842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-07-25 09:24:50.493872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.020 [2024-07-25 09:24:50.501879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.020 [2024-07-25 09:24:50.501915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.020 [2024-07-25 09:24:50.509924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.020 [2024-07-25 09:24:50.509972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.020 [2024-07-25 09:24:50.517937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.517986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.525962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.526011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.533985] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.534032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.542007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.542055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.550028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.550076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.558047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.558093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.566075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.566122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.574096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.574144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.582115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.582162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.590140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.590203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.598161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.598205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.606181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.606227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.614207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.614253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.622197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.622228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.630208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.630234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.638230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.638255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.646253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.646278] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.654264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.654287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.662338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.662391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.670361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.670406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.678371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.678431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.686372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.686411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.694404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.694426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.702424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.702447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.710446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.710468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.718491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.718537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.726513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.726557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.734491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.734515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.742506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.742534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.021 [2024-07-25 09:24:50.750528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.021 [2024-07-25 09:24:50.750549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (449279) - No such process 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 449279 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.280 delay0 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.280 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:18.280 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.280 [2024-07-25 09:24:50.872294] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:24.891 Initializing NVMe Controllers 00:10:24.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:24.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:24.891 Initialization complete. Launching workers. 
00:10:24.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 74 00:10:24.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 361, failed to submit 33 00:10:24.891 success 166, unsuccess 195, failed 0 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.891 rmmod nvme_tcp 00:10:24.891 rmmod nvme_fabrics 00:10:24.891 rmmod nvme_keyring 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 447940 ']' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 447940 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 447940 ']' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 447940 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 447940 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 447940' 00:10:24.891 killing process with pid 447940 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 447940 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 447940 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.891 09:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.831 00:10:26.831 real 0m27.889s 00:10:26.831 user 0m41.225s 00:10:26.831 sys 0m8.318s 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:26.831 ************************************ 00:10:26.831 END TEST nvmf_zcopy 00:10:26.831 ************************************ 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.831 ************************************ 00:10:26.831 START TEST nvmf_nmic 00:10:26.831 ************************************ 00:10:26.831 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.090 * Looking for test storage... 00:10:27.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.090 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.091 09:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:28.990 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:28.990 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:28.990 Found net devices under 0000:82:00.0: cvl_0_0 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.990 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:28.990 Found net devices under 0000:82:00.1: cvl_0_1 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:28.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:10:28.991 00:10:28.991 --- 10.0.0.2 ping statistics --- 00:10:28.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.991 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:10:28.991 00:10:28.991 --- 10.0.0.1 ping statistics --- 00:10:28.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.991 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.991 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=452678 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 452678 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 452678 ']' 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.249 09:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.249 [2024-07-25 09:25:01.784816] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:10:29.249 [2024-07-25 09:25:01.784902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.249 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.250 [2024-07-25 09:25:01.850702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.250 [2024-07-25 09:25:01.967941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.250 [2024-07-25 09:25:01.967996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.250 [2024-07-25 09:25:01.968013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.250 [2024-07-25 09:25:01.968026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.250 [2024-07-25 09:25:01.968038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.250 [2024-07-25 09:25:01.968126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.250 [2024-07-25 09:25:01.968194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.250 [2024-07-25 09:25:01.968284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.250 [2024-07-25 09:25:01.968287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 [2024-07-25 09:25:02.768051] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 Malloc0 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 [2024-07-25 09:25:02.819102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:30.182 test case1: single bdev can't be used in multiple subsystems 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:30.182 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.183 [2024-07-25 09:25:02.842987] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:30.183 [2024-07-25 09:25:02.843015] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:30.183 [2024-07-25 09:25:02.843044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 request: 00:10:30.183 { 00:10:30.183 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:30.183 "namespace": { 
00:10:30.183 "bdev_name": "Malloc0", 00:10:30.183 "no_auto_visible": false 00:10:30.183 }, 00:10:30.183 "method": "nvmf_subsystem_add_ns", 00:10:30.183 "req_id": 1 00:10:30.183 } 00:10:30.183 Got JSON-RPC error response 00:10:30.183 response: 00:10:30.183 { 00:10:30.183 "code": -32602, 00:10:30.183 "message": "Invalid parameters" 00:10:30.183 } 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:30.183 Adding namespace failed - expected result. 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:30.183 test case2: host connect to nvmf target in multiple paths 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.183 [2024-07-25 09:25:02.851094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.183 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.748 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:31.680 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.680 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:10:31.680 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.680 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:31.680 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 
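The two nmic cases traced above reduce to a short RPC sequence against a running nvmf_tgt. A minimal manual sketch of the same checks, assuming ./scripts/rpc.py from an SPDK checkout talking to the default /var/tmp/spdk.sock, and omitting the --hostnqn/--hostid flags the harness passes to nvme connect:

    # one TCP transport and one malloc bdev, exported by cnode1 on port 4420
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # case 1: a second subsystem cannot claim the same bdev (expect the -32602 error shown above)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
    # case 2: the same subsystem is reachable over a second listener, giving the host two paths
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421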
00:10:33.577 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.577 [global] 00:10:33.577 thread=1 00:10:33.577 invalidate=1 00:10:33.577 rw=write 00:10:33.577 time_based=1 00:10:33.577 runtime=1 00:10:33.577 ioengine=libaio 00:10:33.577 direct=1 00:10:33.577 bs=4096 00:10:33.577 iodepth=1 00:10:33.577 norandommap=0 00:10:33.577 numjobs=1 00:10:33.577 00:10:33.577 verify_dump=1 00:10:33.577 verify_backlog=512 00:10:33.577 verify_state_save=0 00:10:33.577 do_verify=1 00:10:33.577 verify=crc32c-intel 00:10:33.577 [job0] 00:10:33.577 filename=/dev/nvme0n1 00:10:33.578 Could not set queue depth (nvme0n1) 00:10:33.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.835 fio-3.35 00:10:33.835 Starting 1 thread 00:10:35.208 00:10:35.208 job0: (groupid=0, jobs=1): err= 0: pid=453320: Thu Jul 25 09:25:07 2024 00:10:35.208 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:35.208 slat (nsec): min=6490, max=52267, avg=12883.98, stdev=5848.16 00:10:35.208 clat (usec): min=177, max=473, avg=259.90, stdev=49.29 00:10:35.208 lat (usec): min=185, max=504, avg=272.78, stdev=53.25 00:10:35.208 clat percentiles (usec): 00:10:35.208 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 217], 00:10:35.208 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 269], 00:10:35.208 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 343], 00:10:35.208 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 433], 99.95th=[ 449], 00:10:35.208 | 99.99th=[ 474] 00:10:35.208 write: IOPS=2111, BW=8448KiB/s (8650kB/s)(8456KiB/1001msec); 0 zone resets 00:10:35.208 slat (nsec): min=8304, max=73445, avg=15541.39, stdev=7066.35 00:10:35.208 clat (usec): min=128, max=405, avg=184.85, stdev=41.92 00:10:35.208 lat (usec): min=137, max=420, avg=200.39, stdev=47.63 00:10:35.208 clat percentiles (usec): 00:10:35.208 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:10:35.208 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 188], 00:10:35.208 | 70.00th=[ 208], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 260], 00:10:35.208 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 343], 00:10:35.208 | 99.99th=[ 404] 00:10:35.208 bw ( KiB/s): min=10416, max=10416, per=100.00%, avg=10416.00, stdev= 0.00, samples=1 00:10:35.208 iops : min= 2604, max= 2604, avg=2604.00, stdev= 0.00, samples=1 00:10:35.208 lat (usec) : 250=70.62%, 500=29.38% 00:10:35.208 cpu : usr=4.60%, sys=8.10%, ctx=4162, majf=0, minf=2 00:10:35.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.209 issued rwts: total=2048,2114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.209 00:10:35.209 Run status group 0 (all jobs): 00:10:35.209 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:35.209 WRITE: bw=8448KiB/s (8650kB/s), 8448KiB/s-8448KiB/s (8650kB/s-8650kB/s), io=8456KiB (8659kB), run=1001-1001msec 00:10:35.209 00:10:35.209 Disk stats (read/write): 00:10:35.209 nvme0n1: ios=1847/2048, merge=0/0, ticks=466/369, in_queue=835, util=91.58% 00:10:35.209 09:25:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.209 rmmod nvme_tcp 00:10:35.209 rmmod nvme_fabrics 00:10:35.209 rmmod nvme_keyring 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 452678 ']' 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 452678 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 452678 ']' 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 452678 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 452678 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 452678' 00:10:35.209 killing process with pid 452678 00:10:35.209 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 452678 00:10:35.209 09:25:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 452678 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.468 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.052 00:10:38.052 real 0m10.658s 00:10:38.052 user 0m25.791s 00:10:38.052 sys 0m2.645s 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.052 ************************************ 00:10:38.052 END TEST nvmf_nmic 00:10:38.052 ************************************ 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.052 ************************************ 00:10:38.052 START TEST nvmf_fio_target 00:10:38.052 ************************************ 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:38.052 * Looking for test storage... 
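The [global]/[job0] job echoed by the fio-wrapper earlier in the nmic run (and reused below with four jobs in the nvmf_fio_target test) is an ordinary fio ini file. A standalone sketch of that single-job write/verify pass, with the filename assumed to be the namespace exposed by the nvme connect above:

    # write out the job file the wrapper printed, then run it directly with fio
    cat > /tmp/nvmf_nmic.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nvmf_nmic.fio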
00:10:38.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.052 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.053 09:25:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.053 09:25:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.053 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:39.956 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:39.956 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.956 
09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:39.956 Found net devices under 0000:82:00.0: cvl_0_0 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:39.956 Found net devices under 0000:82:00.1: cvl_0_1 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.956 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.956 09:25:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:10:39.957 00:10:39.957 --- 10.0.0.2 ping statistics --- 00:10:39.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.957 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:39.957 00:10:39.957 --- 10.0.0.1 ping statistics --- 00:10:39.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.957 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=455397 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 455397 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 455397 ']' 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.957 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 [2024-07-25 09:25:12.568200] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:10:39.957 [2024-07-25 09:25:12.568289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.957 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.957 [2024-07-25 09:25:12.637840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.215 [2024-07-25 09:25:12.762966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.215 [2024-07-25 09:25:12.763016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.215 [2024-07-25 09:25:12.763033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.215 [2024-07-25 09:25:12.763047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.215 [2024-07-25 09:25:12.763059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.215 [2024-07-25 09:25:12.763132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.215 [2024-07-25 09:25:12.763165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.215 [2024-07-25 09:25:12.763194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.215 [2024-07-25 09:25:12.763197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.779 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.779 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:40.779 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:40.779 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.779 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.037 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.037 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.037 [2024-07-25 09:25:13.766919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.295 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.553 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:41.553 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.811 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:41.811 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.069 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:42.069 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.327 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:42.327 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:42.585 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.843 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:42.843 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.101 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:43.101 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.359 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:43.359 09:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:43.616 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:43.873 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:43.873 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.130 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.130 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.388 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.388 [2024-07-25 09:25:17.095992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.388 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:44.645 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:44.902 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.835 09:25:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:45.835 09:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:10:45.835 09:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.835 09:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:10:45.835 09:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:10:45.835 09:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:10:47.733 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:47.733 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:47.733 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.733 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:10:47.733 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.734 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:10:47.734 09:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:47.734 [global] 00:10:47.734 thread=1 00:10:47.734 invalidate=1 00:10:47.734 rw=write 00:10:47.734 time_based=1 00:10:47.734 runtime=1 00:10:47.734 ioengine=libaio 00:10:47.734 direct=1 00:10:47.734 bs=4096 00:10:47.734 iodepth=1 00:10:47.734 norandommap=0 00:10:47.734 numjobs=1 00:10:47.734 00:10:47.734 verify_dump=1 00:10:47.734 verify_backlog=512 00:10:47.734 verify_state_save=0 00:10:47.734 do_verify=1 00:10:47.734 verify=crc32c-intel 00:10:47.734 [job0] 00:10:47.734 filename=/dev/nvme0n1 00:10:47.734 [job1] 00:10:47.734 filename=/dev/nvme0n2 00:10:47.734 [job2] 00:10:47.734 filename=/dev/nvme0n3 00:10:47.734 [job3] 00:10:47.734 filename=/dev/nvme0n4 00:10:47.734 Could not set queue depth (nvme0n1) 00:10:47.734 Could not set queue depth (nvme0n2) 00:10:47.734 Could not set queue depth (nvme0n3) 00:10:47.734 Could not set queue depth (nvme0n4) 00:10:47.991 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.991 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.991 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.991 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.991 fio-3.35 00:10:47.991 Starting 4 threads 00:10:49.364 00:10:49.364 job0: (groupid=0, jobs=1): err= 0: pid=456497: Thu Jul 25 09:25:21 2024 00:10:49.364 read: IOPS=40, BW=163KiB/s (167kB/s)(164KiB/1007msec) 00:10:49.364 slat (nsec): min=9904, max=34785, avg=20337.32, stdev=6630.63 00:10:49.364 clat (usec): min=214, max=42078, avg=21342.25, stdev=20654.40 00:10:49.364 lat (usec): min=225, max=42091, avg=21362.59, stdev=20651.59 00:10:49.364 clat percentiles (usec): 00:10:49.364 | 1.00th=[ 215], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 453], 
00:10:49.364 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[40633], 60.00th=[41157], 00:10:49.364 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:49.364 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.364 | 99.99th=[42206] 00:10:49.364 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:49.364 slat (usec): min=8, max=1114, avg=14.29, stdev=49.87 00:10:49.364 clat (usec): min=141, max=837, avg=231.19, stdev=57.81 00:10:49.365 lat (usec): min=151, max=1450, avg=245.48, stdev=79.38 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 202], 00:10:49.365 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 229], 00:10:49.365 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 297], 95.00th=[ 314], 00:10:49.365 | 99.00th=[ 351], 99.50th=[ 644], 99.90th=[ 840], 99.95th=[ 840], 00:10:49.365 | 99.99th=[ 840] 00:10:49.365 bw ( KiB/s): min= 4096, max= 4096, per=20.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.365 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.365 lat (usec) : 250=73.96%, 500=21.34%, 750=0.72%, 1000=0.18% 00:10:49.365 lat (msec) : 50=3.80% 00:10:49.365 cpu : usr=0.10%, sys=1.09%, ctx=557, majf=0, minf=1 00:10:49.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.365 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.365 job1: (groupid=0, jobs=1): err= 0: pid=456522: Thu Jul 25 09:25:21 2024 00:10:49.365 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:49.365 slat (nsec): min=6319, max=57253, avg=13337.36, stdev=6372.87 00:10:49.365 clat (usec): min=172, max=1076, avg=270.84, stdev=74.66 00:10:49.365 lat (usec): min=181, max=1088, avg=284.17, stdev=77.91 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:10:49.365 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 255], 60.00th=[ 265], 00:10:49.365 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 359], 95.00th=[ 449], 00:10:49.365 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 742], 99.95th=[ 824], 00:10:49.365 | 99.99th=[ 1074] 00:10:49.365 write: IOPS=2047, BW=8192KiB/s (8388kB/s)(8200KiB/1001msec); 0 zone resets 00:10:49.365 slat (usec): min=6, max=1075, avg=14.63, stdev=24.38 00:10:49.365 clat (usec): min=126, max=531, avg=181.28, stdev=39.64 00:10:49.365 lat (usec): min=135, max=1282, avg=195.91, stdev=48.78 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:10:49.365 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 180], 00:10:49.365 | 70.00th=[ 194], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 245], 00:10:49.365 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 371], 99.95th=[ 453], 00:10:49.365 | 99.99th=[ 529] 00:10:49.365 bw ( KiB/s): min= 9744, max= 9744, per=47.89%, avg=9744.00, stdev= 0.00, samples=1 00:10:49.365 iops : min= 2436, max= 2436, avg=2436.00, stdev= 0.00, samples=1 00:10:49.365 lat (usec) : 250=71.30%, 500=27.70%, 750=0.95%, 1000=0.02% 00:10:49.365 lat (msec) : 2=0.02% 00:10:49.365 cpu : usr=2.70%, sys=6.30%, ctx=4101, majf=0, minf=1 00:10:49.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:49.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.365 issued rwts: total=2048,2050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.365 job2: (groupid=0, jobs=1): err= 0: pid=456563: Thu Jul 25 09:25:21 2024 00:10:49.365 read: IOPS=1547, BW=6190KiB/s (6338kB/s)(6196KiB/1001msec) 00:10:49.365 slat (nsec): min=5508, max=60893, avg=15415.45, stdev=9346.67 00:10:49.365 clat (usec): min=198, max=41080, avg=360.34, stdev=1586.88 00:10:49.365 lat (usec): min=206, max=41098, avg=375.76, stdev=1587.16 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 239], 00:10:49.365 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:10:49.365 | 70.00th=[ 306], 80.00th=[ 338], 90.00th=[ 379], 95.00th=[ 433], 00:10:49.365 | 99.00th=[ 545], 99.50th=[ 603], 99.90th=[41157], 99.95th=[41157], 00:10:49.365 | 99.99th=[41157] 00:10:49.365 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:49.365 slat (usec): min=7, max=105, avg=12.46, stdev= 5.68 00:10:49.365 clat (usec): min=138, max=390, avg=184.72, stdev=29.07 00:10:49.365 lat (usec): min=145, max=409, avg=197.18, stdev=31.21 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:10:49.365 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:10:49.365 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 225], 95.00th=[ 239], 00:10:49.365 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 351], 99.95th=[ 379], 00:10:49.365 | 99.99th=[ 392] 00:10:49.365 bw ( KiB/s): min= 8192, max= 8192, per=40.26%, avg=8192.00, stdev= 0.00, samples=1 00:10:49.365 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:49.365 lat (usec) : 250=68.00%, 500=31.28%, 750=0.64% 00:10:49.365 lat (msec) : 50=0.08% 00:10:49.365 cpu : usr=3.20%, sys=4.60%, ctx=3598, majf=0, minf=1 00:10:49.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.365 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.365 job3: (groupid=0, jobs=1): err= 0: pid=456576: Thu Jul 25 09:25:21 2024 00:10:49.365 read: IOPS=22, BW=91.9KiB/s (94.1kB/s)(92.0KiB/1001msec) 00:10:49.365 slat (nsec): min=8880, max=36356, avg=20460.48, stdev=10522.79 00:10:49.365 clat (usec): min=235, max=41993, avg=37870.88, stdev=10754.58 00:10:49.365 lat (usec): min=252, max=42008, avg=37891.34, stdev=10753.43 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 235], 5.00th=[ 7767], 10.00th=[40633], 20.00th=[41157], 00:10:49.365 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:49.365 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:49.365 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.365 | 99.99th=[42206] 00:10:49.365 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:49.365 slat (nsec): min=6850, max=70832, avg=9197.33, stdev=3468.73 00:10:49.365 clat (usec): min=152, max=840, avg=241.59, stdev=64.82 00:10:49.365 lat (usec): min=161, max=849, avg=250.79, 
stdev=64.85 00:10:49.365 clat percentiles (usec): 00:10:49.365 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 208], 00:10:49.365 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:10:49.365 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 306], 95.00th=[ 338], 00:10:49.365 | 99.00th=[ 416], 99.50th=[ 709], 99.90th=[ 840], 99.95th=[ 840], 00:10:49.365 | 99.99th=[ 840] 00:10:49.366 bw ( KiB/s): min= 4096, max= 4096, per=20.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.366 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.366 lat (usec) : 250=73.83%, 500=21.12%, 750=0.56%, 1000=0.37% 00:10:49.366 lat (msec) : 10=0.19%, 50=3.93% 00:10:49.366 cpu : usr=0.10%, sys=0.60%, ctx=535, majf=0, minf=2 00:10:49.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.366 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.366 00:10:49.366 Run status group 0 (all jobs): 00:10:49.366 READ: bw=14.2MiB/s (14.9MB/s), 91.9KiB/s-8184KiB/s (94.1kB/s-8380kB/s), io=14.3MiB (15.0MB), run=1001-1007msec 00:10:49.366 WRITE: bw=19.9MiB/s (20.8MB/s), 2034KiB/s-8192KiB/s (2083kB/s-8388kB/s), io=20.0MiB (21.0MB), run=1001-1007msec 00:10:49.366 00:10:49.366 Disk stats (read/write): 00:10:49.366 nvme0n1: ios=77/512, merge=0/0, ticks=906/119, in_queue=1025, util=96.39% 00:10:49.366 nvme0n2: ios=1651/2048, merge=0/0, ticks=762/367, in_queue=1129, util=97.04% 00:10:49.366 nvme0n3: ios=1407/1536, merge=0/0, ticks=1437/281, in_queue=1718, util=97.04% 00:10:49.366 nvme0n4: ios=18/512, merge=0/0, ticks=699/125, in_queue=824, util=89.46% 00:10:49.366 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:49.366 [global] 00:10:49.366 thread=1 00:10:49.366 invalidate=1 00:10:49.366 rw=randwrite 00:10:49.366 time_based=1 00:10:49.366 runtime=1 00:10:49.366 ioengine=libaio 00:10:49.366 direct=1 00:10:49.366 bs=4096 00:10:49.366 iodepth=1 00:10:49.366 norandommap=0 00:10:49.366 numjobs=1 00:10:49.366 00:10:49.366 verify_dump=1 00:10:49.366 verify_backlog=512 00:10:49.366 verify_state_save=0 00:10:49.366 do_verify=1 00:10:49.366 verify=crc32c-intel 00:10:49.366 [job0] 00:10:49.366 filename=/dev/nvme0n1 00:10:49.366 [job1] 00:10:49.366 filename=/dev/nvme0n2 00:10:49.366 [job2] 00:10:49.366 filename=/dev/nvme0n3 00:10:49.366 [job3] 00:10:49.366 filename=/dev/nvme0n4 00:10:49.366 Could not set queue depth (nvme0n1) 00:10:49.366 Could not set queue depth (nvme0n2) 00:10:49.366 Could not set queue depth (nvme0n3) 00:10:49.366 Could not set queue depth (nvme0n4) 00:10:49.366 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.366 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.366 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.366 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.366 fio-3.35 00:10:49.366 Starting 4 threads 00:10:50.740 00:10:50.740 job0: (groupid=0, jobs=1): err= 0: pid=456836: Thu Jul 
25 09:25:23 2024 00:10:50.740 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:50.740 slat (nsec): min=6815, max=54665, avg=12262.60, stdev=5884.14 00:10:50.740 clat (usec): min=169, max=41057, avg=764.86, stdev=4556.99 00:10:50.740 lat (usec): min=177, max=41070, avg=777.12, stdev=4557.79 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:10:50.740 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 255], 00:10:50.740 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 326], 00:10:50.740 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.740 | 99.99th=[41157] 00:10:50.740 write: IOPS=1055, BW=4224KiB/s (4325kB/s)(4228KiB/1001msec); 0 zone resets 00:10:50.740 slat (nsec): min=8732, max=58440, avg=13410.50, stdev=6448.70 00:10:50.740 clat (usec): min=123, max=658, avg=172.08, stdev=40.36 00:10:50.740 lat (usec): min=132, max=669, avg=185.49, stdev=42.36 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 145], 00:10:50.740 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:10:50.740 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 221], 00:10:50.740 | 99.00th=[ 245], 99.50th=[ 351], 99.90th=[ 652], 99.95th=[ 660], 00:10:50.740 | 99.99th=[ 660] 00:10:50.740 bw ( KiB/s): min= 4096, max= 4096, per=40.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.740 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.740 lat (usec) : 250=78.66%, 500=20.23%, 750=0.38%, 1000=0.10% 00:10:50.740 lat (msec) : 50=0.62% 00:10:50.740 cpu : usr=1.50%, sys=4.20%, ctx=2082, majf=0, minf=1 00:10:50.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 issued rwts: total=1024,1057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.740 job1: (groupid=0, jobs=1): err= 0: pid=456837: Thu Jul 25 09:25:23 2024 00:10:50.740 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:10:50.740 slat (nsec): min=7991, max=43644, avg=19863.73, stdev=10310.18 00:10:50.740 clat (usec): min=40862, max=41073, avg=40974.50, stdev=52.74 00:10:50.740 lat (usec): min=40900, max=41089, avg=40994.36, stdev=49.17 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:50.740 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:50.740 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:50.740 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.740 | 99.99th=[41157] 00:10:50.740 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:50.740 slat (nsec): min=7758, max=67219, avg=12768.98, stdev=5945.14 00:10:50.740 clat (usec): min=150, max=479, avg=230.26, stdev=31.51 00:10:50.740 lat (usec): min=159, max=487, avg=243.03, stdev=31.56 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 212], 00:10:50.740 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:10:50.740 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 269], 00:10:50.740 | 99.00th=[ 318], 99.50th=[ 392], 99.90th=[ 478], 99.95th=[ 478], 
00:10:50.740 | 99.99th=[ 478] 00:10:50.740 bw ( KiB/s): min= 4096, max= 4096, per=40.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.740 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.740 lat (usec) : 250=77.15%, 500=18.73% 00:10:50.740 lat (msec) : 50=4.12% 00:10:50.740 cpu : usr=0.49%, sys=0.49%, ctx=535, majf=0, minf=1 00:10:50.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.740 job2: (groupid=0, jobs=1): err= 0: pid=456838: Thu Jul 25 09:25:23 2024 00:10:50.740 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(100KiB/1035msec) 00:10:50.740 slat (nsec): min=9794, max=54331, avg=18551.20, stdev=9550.20 00:10:50.740 clat (usec): min=245, max=41088, avg=36050.53, stdev=13464.21 00:10:50.740 lat (usec): min=260, max=41104, avg=36069.08, stdev=13460.25 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 247], 5.00th=[ 297], 10.00th=[ 441], 20.00th=[40633], 00:10:50.740 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:50.740 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:50.740 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.740 | 99.99th=[41157] 00:10:50.740 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:50.740 slat (nsec): min=7935, max=56816, avg=13207.92, stdev=5956.75 00:10:50.740 clat (usec): min=161, max=468, avg=241.65, stdev=41.48 00:10:50.740 lat (usec): min=170, max=477, avg=254.86, stdev=41.85 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 217], 00:10:50.740 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:10:50.740 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 318], 00:10:50.740 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 469], 99.95th=[ 469], 00:10:50.740 | 99.99th=[ 469] 00:10:50.740 bw ( KiB/s): min= 4096, max= 4096, per=40.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.740 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.740 lat (usec) : 250=70.76%, 500=25.14% 00:10:50.740 lat (msec) : 50=4.10% 00:10:50.740 cpu : usr=0.39%, sys=0.68%, ctx=538, majf=0, minf=2 00:10:50.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.740 job3: (groupid=0, jobs=1): err= 0: pid=456839: Thu Jul 25 09:25:23 2024 00:10:50.740 read: IOPS=372, BW=1490KiB/s (1526kB/s)(1496KiB/1004msec) 00:10:50.740 slat (nsec): min=6157, max=52511, avg=11181.97, stdev=5573.52 00:10:50.740 clat (usec): min=190, max=41218, avg=2316.08, stdev=8953.18 00:10:50.740 lat (usec): min=201, max=41250, avg=2327.26, stdev=8955.24 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:10:50.740 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:10:50.740 | 70.00th=[ 249], 80.00th=[ 285], 
90.00th=[ 322], 95.00th=[40633], 00:10:50.740 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.740 | 99.99th=[41157] 00:10:50.740 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:50.740 slat (nsec): min=7419, max=47836, avg=15679.29, stdev=8751.10 00:10:50.740 clat (usec): min=144, max=443, avg=238.11, stdev=41.29 00:10:50.740 lat (usec): min=153, max=483, avg=253.79, stdev=42.08 00:10:50.740 clat percentiles (usec): 00:10:50.740 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:10:50.740 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 237], 00:10:50.740 | 70.00th=[ 245], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 322], 00:10:50.740 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 445], 99.95th=[ 445], 00:10:50.740 | 99.99th=[ 445] 00:10:50.740 bw ( KiB/s): min= 4096, max= 4096, per=40.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.740 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.740 lat (usec) : 250=72.23%, 500=25.17%, 750=0.45% 00:10:50.740 lat (msec) : 50=2.14% 00:10:50.740 cpu : usr=0.70%, sys=1.10%, ctx=887, majf=0, minf=1 00:10:50.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.740 issued rwts: total=374,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.740 00:10:50.740 Run status group 0 (all jobs): 00:10:50.740 READ: bw=5585KiB/s (5719kB/s), 85.5KiB/s-4092KiB/s (87.6kB/s-4190kB/s), io=5780KiB (5919kB), run=1001-1035msec 00:10:50.740 WRITE: bw=9.79MiB/s (10.3MB/s), 1979KiB/s-4224KiB/s (2026kB/s-4325kB/s), io=10.1MiB (10.6MB), run=1001-1035msec 00:10:50.740 00:10:50.740 Disk stats (read/write): 00:10:50.740 nvme0n1: ios=581/1024, merge=0/0, ticks=1435/173, in_queue=1608, util=98.70% 00:10:50.740 nvme0n2: ios=43/512, merge=0/0, ticks=1682/111, in_queue=1793, util=97.97% 00:10:50.740 nvme0n3: ios=65/512, merge=0/0, ticks=1620/125, in_queue=1745, util=98.23% 00:10:50.740 nvme0n4: ios=414/512, merge=0/0, ticks=900/116, in_queue=1016, util=98.63% 00:10:50.740 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:50.740 [global] 00:10:50.740 thread=1 00:10:50.740 invalidate=1 00:10:50.740 rw=write 00:10:50.741 time_based=1 00:10:50.741 runtime=1 00:10:50.741 ioengine=libaio 00:10:50.741 direct=1 00:10:50.741 bs=4096 00:10:50.741 iodepth=128 00:10:50.741 norandommap=0 00:10:50.741 numjobs=1 00:10:50.741 00:10:50.741 verify_dump=1 00:10:50.741 verify_backlog=512 00:10:50.741 verify_state_save=0 00:10:50.741 do_verify=1 00:10:50.741 verify=crc32c-intel 00:10:50.741 [job0] 00:10:50.741 filename=/dev/nvme0n1 00:10:50.741 [job1] 00:10:50.741 filename=/dev/nvme0n2 00:10:50.741 [job2] 00:10:50.741 filename=/dev/nvme0n3 00:10:50.741 [job3] 00:10:50.741 filename=/dev/nvme0n4 00:10:50.741 Could not set queue depth (nvme0n1) 00:10:50.741 Could not set queue depth (nvme0n2) 00:10:50.741 Could not set queue depth (nvme0n3) 00:10:50.741 Could not set queue depth (nvme0n4) 00:10:50.741 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.741 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:50.741 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.741 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.741 fio-3.35 00:10:50.741 Starting 4 threads 00:10:52.115 00:10:52.115 job0: (groupid=0, jobs=1): err= 0: pid=457063: Thu Jul 25 09:25:24 2024 00:10:52.115 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:52.115 slat (usec): min=2, max=11284, avg=102.56, stdev=627.07 00:10:52.115 clat (usec): min=5880, max=52125, avg=13769.25, stdev=5526.39 00:10:52.115 lat (usec): min=5890, max=52514, avg=13871.81, stdev=5559.73 00:10:52.115 clat percentiles (usec): 00:10:52.115 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10814], 00:10:52.115 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12649], 00:10:52.115 | 70.00th=[13435], 80.00th=[15401], 90.00th=[21627], 95.00th=[27132], 00:10:52.115 | 99.00th=[32900], 99.50th=[33162], 99.90th=[47973], 99.95th=[52167], 00:10:52.115 | 99.99th=[52167] 00:10:52.115 write: IOPS=4546, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1003msec); 0 zone resets 00:10:52.115 slat (usec): min=3, max=22236, avg=118.05, stdev=819.54 00:10:52.115 clat (usec): min=273, max=64512, avg=15482.42, stdev=9378.59 00:10:52.115 lat (usec): min=5510, max=64523, avg=15600.46, stdev=9424.09 00:10:52.115 clat percentiles (usec): 00:10:52.115 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10552], 00:10:52.115 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:10:52.115 | 70.00th=[13566], 80.00th=[20317], 90.00th=[28181], 95.00th=[34866], 00:10:52.115 | 99.00th=[57410], 99.50th=[58983], 99.90th=[64750], 99.95th=[64750], 00:10:52.115 | 99.99th=[64750] 00:10:52.115 bw ( KiB/s): min=15512, max=19944, per=25.44%, avg=17728.00, stdev=3133.90, samples=2 00:10:52.115 iops : min= 3878, max= 4986, avg=4432.00, stdev=783.47, samples=2 00:10:52.115 lat (usec) : 500=0.01% 00:10:52.115 lat (msec) : 10=11.40%, 20=71.96%, 50=15.37%, 100=1.26% 00:10:52.115 cpu : usr=4.59%, sys=7.49%, ctx=424, majf=0, minf=1 00:10:52.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:52.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.115 issued rwts: total=4096,4560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.115 job1: (groupid=0, jobs=1): err= 0: pid=457064: Thu Jul 25 09:25:24 2024 00:10:52.115 read: IOPS=3612, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1004msec) 00:10:52.115 slat (usec): min=2, max=15287, avg=130.48, stdev=821.74 00:10:52.115 clat (usec): min=653, max=50337, avg=16157.06, stdev=8855.31 00:10:52.115 lat (usec): min=1079, max=50360, avg=16287.54, stdev=8900.92 00:10:52.115 clat percentiles (usec): 00:10:52.115 | 1.00th=[ 3556], 5.00th=[ 6390], 10.00th=[ 9503], 20.00th=[10552], 00:10:52.115 | 30.00th=[11076], 40.00th=[11731], 50.00th=[11994], 60.00th=[14091], 00:10:52.115 | 70.00th=[18744], 80.00th=[22152], 90.00th=[28705], 95.00th=[34866], 00:10:52.115 | 99.00th=[48497], 99.50th=[50070], 99.90th=[50070], 99.95th=[50594], 00:10:52.115 | 99.99th=[50594] 00:10:52.115 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:52.115 slat (usec): min=3, max=17528, avg=112.54, stdev=787.82 00:10:52.115 clat (usec): min=305, max=56845, 
avg=16784.09, stdev=10162.16 00:10:52.115 lat (usec): min=323, max=56876, avg=16896.63, stdev=10217.19 00:10:52.115 clat percentiles (usec): 00:10:52.115 | 1.00th=[ 1029], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10683], 00:10:52.115 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12125], 60.00th=[15664], 00:10:52.115 | 70.00th=[19006], 80.00th=[22152], 90.00th=[30802], 95.00th=[39584], 00:10:52.115 | 99.00th=[54789], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:10:52.115 | 99.99th=[56886] 00:10:52.115 bw ( KiB/s): min=14104, max=17984, per=23.02%, avg=16044.00, stdev=2743.57, samples=2 00:10:52.115 iops : min= 3526, max= 4496, avg=4011.00, stdev=685.89, samples=2 00:10:52.115 lat (usec) : 500=0.04%, 750=0.03%, 1000=0.36% 00:10:52.115 lat (msec) : 2=0.79%, 4=1.71%, 10=12.94%, 20=58.49%, 50=24.07% 00:10:52.115 lat (msec) : 100=1.58% 00:10:52.115 cpu : usr=3.39%, sys=5.88%, ctx=447, majf=0, minf=1 00:10:52.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:52.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.115 issued rwts: total=3627,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.115 job2: (groupid=0, jobs=1): err= 0: pid=457065: Thu Jul 25 09:25:24 2024 00:10:52.115 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:52.115 slat (usec): min=2, max=12433, avg=116.52, stdev=762.12 00:10:52.115 clat (usec): min=6423, max=34302, avg=15349.64, stdev=3969.73 00:10:52.115 lat (usec): min=6430, max=34322, avg=15466.16, stdev=4032.96 00:10:52.115 clat percentiles (usec): 00:10:52.115 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:10:52.115 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14222], 60.00th=[15795], 00:10:52.115 | 70.00th=[16319], 80.00th=[18744], 90.00th=[21890], 95.00th=[23200], 00:10:52.115 | 99.00th=[27395], 99.50th=[27395], 99.90th=[28967], 99.95th=[30802], 00:10:52.115 | 99.99th=[34341] 00:10:52.115 write: IOPS=4216, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1003msec); 0 zone resets 00:10:52.115 slat (usec): min=3, max=25594, avg=114.34, stdev=896.37 00:10:52.116 clat (usec): min=1860, max=47813, avg=15168.67, stdev=4327.22 00:10:52.116 lat (usec): min=6750, max=47831, avg=15283.01, stdev=4405.80 00:10:52.116 clat percentiles (usec): 00:10:52.116 | 1.00th=[ 7635], 5.00th=[10159], 10.00th=[11600], 20.00th=[11863], 00:10:52.116 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13829], 60.00th=[15008], 00:10:52.116 | 70.00th=[17171], 80.00th=[19006], 90.00th=[21627], 95.00th=[22676], 00:10:52.116 | 99.00th=[27395], 99.50th=[27657], 99.90th=[27657], 99.95th=[41681], 00:10:52.116 | 99.99th=[47973] 00:10:52.116 bw ( KiB/s): min=16384, max=16440, per=23.55%, avg=16412.00, stdev=39.60, samples=2 00:10:52.116 iops : min= 4096, max= 4110, avg=4103.00, stdev= 9.90, samples=2 00:10:52.116 lat (msec) : 2=0.01%, 10=3.32%, 20=81.45%, 50=15.22% 00:10:52.116 cpu : usr=4.39%, sys=7.68%, ctx=299, majf=0, minf=1 00:10:52.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:52.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.116 issued rwts: total=4096,4229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.116 job3: (groupid=0, jobs=1): 
err= 0: pid=457066: Thu Jul 25 09:25:24 2024 00:10:52.116 read: IOPS=4388, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1002msec) 00:10:52.116 slat (usec): min=2, max=13662, avg=106.97, stdev=638.15 00:10:52.116 clat (usec): min=1347, max=32127, avg=14182.16, stdev=3574.99 00:10:52.116 lat (usec): min=1355, max=32137, avg=14289.13, stdev=3579.84 00:10:52.116 clat percentiles (usec): 00:10:52.116 | 1.00th=[ 4948], 5.00th=[10290], 10.00th=[11207], 20.00th=[12649], 00:10:52.116 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:10:52.116 | 70.00th=[14222], 80.00th=[14877], 90.00th=[17171], 95.00th=[22414], 00:10:52.116 | 99.00th=[28443], 99.50th=[29492], 99.90th=[32113], 99.95th=[32113], 00:10:52.116 | 99.99th=[32113] 00:10:52.116 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:52.116 slat (usec): min=3, max=6936, avg=101.59, stdev=415.64 00:10:52.116 clat (usec): min=3930, max=32118, avg=13913.20, stdev=3741.12 00:10:52.116 lat (usec): min=3942, max=32133, avg=14014.80, stdev=3753.33 00:10:52.116 clat percentiles (usec): 00:10:52.116 | 1.00th=[ 6587], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[11994], 00:10:52.116 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:10:52.116 | 70.00th=[13829], 80.00th=[14615], 90.00th=[19268], 95.00th=[22152], 00:10:52.116 | 99.00th=[28443], 99.50th=[29754], 99.90th=[30540], 99.95th=[30540], 00:10:52.116 | 99.99th=[32113] 00:10:52.116 bw ( KiB/s): min=16384, max=20480, per=26.45%, avg=18432.00, stdev=2896.31, samples=2 00:10:52.116 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:52.116 lat (msec) : 2=0.14%, 4=0.07%, 10=5.11%, 20=86.84%, 50=7.84% 00:10:52.116 cpu : usr=4.00%, sys=5.89%, ctx=590, majf=0, minf=1 00:10:52.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:52.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.116 issued rwts: total=4397,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.116 00:10:52.116 Run status group 0 (all jobs): 00:10:52.116 READ: bw=63.1MiB/s (66.2MB/s), 14.1MiB/s-17.1MiB/s (14.8MB/s-18.0MB/s), io=63.3MiB (66.4MB), run=1002-1004msec 00:10:52.116 WRITE: bw=68.1MiB/s (71.4MB/s), 15.9MiB/s-18.0MiB/s (16.7MB/s-18.8MB/s), io=68.3MiB (71.7MB), run=1002-1004msec 00:10:52.116 00:10:52.116 Disk stats (read/write): 00:10:52.116 nvme0n1: ios=3634/3806, merge=0/0, ticks=16389/20053, in_queue=36442, util=83.57% 00:10:52.116 nvme0n2: ios=3091/3584, merge=0/0, ticks=22819/31641, in_queue=54460, util=94.40% 00:10:52.116 nvme0n3: ios=3637/3680, merge=0/0, ticks=29478/31585, in_queue=61063, util=97.08% 00:10:52.116 nvme0n4: ios=3630/3969, merge=0/0, ticks=24247/26812, in_queue=51059, util=98.11% 00:10:52.116 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:52.116 [global] 00:10:52.116 thread=1 00:10:52.116 invalidate=1 00:10:52.116 rw=randwrite 00:10:52.116 time_based=1 00:10:52.116 runtime=1 00:10:52.116 ioengine=libaio 00:10:52.116 direct=1 00:10:52.116 bs=4096 00:10:52.116 iodepth=128 00:10:52.116 norandommap=0 00:10:52.116 numjobs=1 00:10:52.116 00:10:52.116 verify_dump=1 00:10:52.116 verify_backlog=512 00:10:52.116 verify_state_save=0 00:10:52.116 do_verify=1 00:10:52.116 
verify=crc32c-intel 00:10:52.116 [job0] 00:10:52.116 filename=/dev/nvme0n1 00:10:52.116 [job1] 00:10:52.116 filename=/dev/nvme0n2 00:10:52.116 [job2] 00:10:52.116 filename=/dev/nvme0n3 00:10:52.116 [job3] 00:10:52.116 filename=/dev/nvme0n4 00:10:52.116 Could not set queue depth (nvme0n1) 00:10:52.116 Could not set queue depth (nvme0n2) 00:10:52.116 Could not set queue depth (nvme0n3) 00:10:52.116 Could not set queue depth (nvme0n4) 00:10:52.374 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.374 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.374 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.374 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.374 fio-3.35 00:10:52.374 Starting 4 threads 00:10:53.749 00:10:53.749 job0: (groupid=0, jobs=1): err= 0: pid=457302: Thu Jul 25 09:25:26 2024 00:10:53.749 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:53.749 slat (usec): min=2, max=35767, avg=144.45, stdev=1191.25 00:10:53.749 clat (usec): min=3929, max=68109, avg=18926.97, stdev=12602.45 00:10:53.749 lat (usec): min=3934, max=70558, avg=19071.42, stdev=12708.06 00:10:53.749 clat percentiles (usec): 00:10:53.749 | 1.00th=[ 4359], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9896], 00:10:53.749 | 30.00th=[10552], 40.00th=[11469], 50.00th=[11994], 60.00th=[17695], 00:10:53.749 | 70.00th=[20579], 80.00th=[26870], 90.00th=[38011], 95.00th=[47449], 00:10:53.749 | 99.00th=[58459], 99.50th=[58459], 99.90th=[61604], 99.95th=[63177], 00:10:53.749 | 99.99th=[67634] 00:10:53.749 write: IOPS=3746, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1003msec); 0 zone resets 00:10:53.749 slat (usec): min=3, max=14386, avg=112.97, stdev=690.28 00:10:53.749 clat (usec): min=184, max=60165, avg=15820.93, stdev=8086.66 00:10:53.749 lat (usec): min=223, max=60177, avg=15933.90, stdev=8143.07 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 750], 5.00th=[ 4555], 10.00th=[ 6718], 20.00th=[10552], 00:10:53.750 | 30.00th=[11338], 40.00th=[11863], 50.00th=[13042], 60.00th=[16450], 00:10:53.750 | 70.00th=[21103], 80.00th=[22676], 90.00th=[26346], 95.00th=[27919], 00:10:53.750 | 99.00th=[44827], 99.50th=[46924], 99.90th=[55837], 99.95th=[55837], 00:10:53.750 | 99.99th=[60031] 00:10:53.750 bw ( KiB/s): min=11112, max=17936, per=22.62%, avg=14524.00, stdev=4825.30, samples=2 00:10:53.750 iops : min= 2778, max= 4484, avg=3631.00, stdev=1206.32, samples=2 00:10:53.750 lat (usec) : 250=0.01%, 500=0.07%, 750=0.42%, 1000=0.94% 00:10:53.750 lat (msec) : 2=0.25%, 4=1.08%, 10=14.19%, 20=51.08%, 50=29.73% 00:10:53.750 lat (msec) : 100=2.23% 00:10:53.750 cpu : usr=3.19%, sys=5.19%, ctx=335, majf=0, minf=11 00:10:53.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:53.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.750 issued rwts: total=3584,3758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.750 job1: (groupid=0, jobs=1): err= 0: pid=457303: Thu Jul 25 09:25:26 2024 00:10:53.750 read: IOPS=4036, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1013msec) 00:10:53.750 slat (usec): min=2, max=28473, avg=127.85, stdev=1013.16 00:10:53.750 clat 
(usec): min=1058, max=47774, avg=15885.85, stdev=8463.87 00:10:53.750 lat (usec): min=1736, max=50453, avg=16013.70, stdev=8555.34 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 4621], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9896], 00:10:53.750 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11469], 60.00th=[14091], 00:10:53.750 | 70.00th=[19006], 80.00th=[22676], 90.00th=[29492], 95.00th=[33424], 00:10:53.750 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41681], 99.95th=[45351], 00:10:53.750 | 99.99th=[47973] 00:10:53.750 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:10:53.750 slat (usec): min=3, max=16504, avg=109.78, stdev=718.59 00:10:53.750 clat (usec): min=2169, max=63195, avg=15360.68, stdev=9725.83 00:10:53.750 lat (usec): min=2180, max=63201, avg=15470.46, stdev=9780.51 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 3720], 5.00th=[ 7242], 10.00th=[ 9241], 20.00th=[ 9765], 00:10:53.750 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11207], 60.00th=[13829], 00:10:53.750 | 70.00th=[16057], 80.00th=[18482], 90.00th=[25822], 95.00th=[34341], 00:10:53.750 | 99.00th=[58983], 99.50th=[61604], 99.90th=[63177], 99.95th=[63177], 00:10:53.750 | 99.99th=[63177] 00:10:53.750 bw ( KiB/s): min=13376, max=19392, per=25.52%, avg=16384.00, stdev=4253.95, samples=2 00:10:53.750 iops : min= 3344, max= 4848, avg=4096.00, stdev=1063.49, samples=2 00:10:53.750 lat (msec) : 2=0.01%, 4=0.75%, 10=22.61%, 20=55.59%, 50=19.88% 00:10:53.750 lat (msec) : 100=1.16% 00:10:53.750 cpu : usr=3.06%, sys=6.62%, ctx=368, majf=0, minf=17 00:10:53.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:53.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.750 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.750 job2: (groupid=0, jobs=1): err= 0: pid=457304: Thu Jul 25 09:25:26 2024 00:10:53.750 read: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1006msec) 00:10:53.750 slat (usec): min=3, max=12020, avg=104.66, stdev=610.64 00:10:53.750 clat (usec): min=4771, max=33882, avg=13901.78, stdev=3367.19 00:10:53.750 lat (usec): min=5461, max=35633, avg=14006.44, stdev=3401.50 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:10:53.750 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13304], 60.00th=[13829], 00:10:53.750 | 70.00th=[14353], 80.00th=[16057], 90.00th=[17433], 95.00th=[21365], 00:10:53.750 | 99.00th=[27395], 99.50th=[27395], 99.90th=[32900], 99.95th=[32900], 00:10:53.750 | 99.99th=[33817] 00:10:53.750 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:10:53.750 slat (usec): min=4, max=21208, avg=111.41, stdev=746.59 00:10:53.750 clat (usec): min=7073, max=46304, avg=14980.19, stdev=6298.76 00:10:53.750 lat (usec): min=7090, max=46349, avg=15091.60, stdev=6350.69 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11600], 00:10:53.750 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13829], 00:10:53.750 | 70.00th=[14746], 80.00th=[16712], 90.00th=[18744], 95.00th=[33162], 00:10:53.750 | 99.00th=[40109], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:10:53.750 | 99.99th=[46400] 00:10:53.750 bw ( KiB/s): min=16384, max=19840, per=28.21%, avg=18112.00, 
stdev=2443.76, samples=2 00:10:53.750 iops : min= 4096, max= 4960, avg=4528.00, stdev=610.94, samples=2 00:10:53.750 lat (msec) : 10=4.77%, 20=88.26%, 50=6.97% 00:10:53.750 cpu : usr=4.98%, sys=11.24%, ctx=329, majf=0, minf=11 00:10:53.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:53.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.750 issued rwts: total=4143,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.750 job3: (groupid=0, jobs=1): err= 0: pid=457305: Thu Jul 25 09:25:26 2024 00:10:53.750 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:10:53.750 slat (usec): min=2, max=38128, avg=142.76, stdev=1193.99 00:10:53.750 clat (usec): min=4849, max=85222, avg=17824.94, stdev=10383.81 00:10:53.750 lat (usec): min=4856, max=85236, avg=17967.70, stdev=10482.92 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[12649], 00:10:53.750 | 30.00th=[13173], 40.00th=[14222], 50.00th=[15139], 60.00th=[15926], 00:10:53.750 | 70.00th=[16909], 80.00th=[20317], 90.00th=[26346], 95.00th=[34866], 00:10:53.750 | 99.00th=[66323], 99.50th=[66847], 99.90th=[66847], 99.95th=[72877], 00:10:53.750 | 99.99th=[85459] 00:10:53.750 write: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1014msec); 0 zone resets 00:10:53.750 slat (usec): min=3, max=12287, avg=108.67, stdev=700.52 00:10:53.750 clat (usec): min=2800, max=63862, avg=16856.20, stdev=7165.13 00:10:53.750 lat (usec): min=2815, max=63867, avg=16964.87, stdev=7210.21 00:10:53.750 clat percentiles (usec): 00:10:53.750 | 1.00th=[ 5866], 5.00th=[ 7635], 10.00th=[ 9634], 20.00th=[11338], 00:10:53.750 | 30.00th=[12518], 40.00th=[13435], 50.00th=[15270], 60.00th=[17171], 00:10:53.750 | 70.00th=[20317], 80.00th=[22676], 90.00th=[25822], 95.00th=[29492], 00:10:53.750 | 99.00th=[41157], 99.50th=[49021], 99.90th=[58459], 99.95th=[58459], 00:10:53.750 | 99.99th=[63701] 00:10:53.750 bw ( KiB/s): min=10952, max=18544, per=22.97%, avg=14748.00, stdev=5368.35, samples=2 00:10:53.750 iops : min= 2738, max= 4636, avg=3687.00, stdev=1342.09, samples=2 00:10:53.750 lat (msec) : 4=0.08%, 10=9.16%, 20=65.17%, 50=23.80%, 100=1.78% 00:10:53.750 cpu : usr=2.76%, sys=7.31%, ctx=287, majf=0, minf=13 00:10:53.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:53.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.750 issued rwts: total=3584,3815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.750 00:10:53.750 Run status group 0 (all jobs): 00:10:53.750 READ: bw=59.3MiB/s (62.2MB/s), 13.8MiB/s-16.1MiB/s (14.5MB/s-16.9MB/s), io=60.2MiB (63.1MB), run=1003-1014msec 00:10:53.750 WRITE: bw=62.7MiB/s (65.8MB/s), 14.6MiB/s-17.9MiB/s (15.3MB/s-18.8MB/s), io=63.6MiB (66.7MB), run=1003-1014msec 00:10:53.750 00:10:53.750 Disk stats (read/write): 00:10:53.750 nvme0n1: ios=2777/3072, merge=0/0, ticks=30230/22743, in_queue=52973, util=90.68% 00:10:53.750 nvme0n2: ios=3472/3584, merge=0/0, ticks=27296/23228, in_queue=50524, util=90.65% 00:10:53.750 nvme0n3: ios=3637/3652, merge=0/0, ticks=22620/24377, in_queue=46997, util=98.64% 00:10:53.750 nvme0n4: ios=3188/3584, merge=0/0, ticks=35362/41082, 
in_queue=76444, util=98.53% 00:10:53.750 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:53.750 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=457437 00:10:53.750 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:53.750 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:53.750 [global] 00:10:53.750 thread=1 00:10:53.750 invalidate=1 00:10:53.750 rw=read 00:10:53.750 time_based=1 00:10:53.750 runtime=10 00:10:53.750 ioengine=libaio 00:10:53.750 direct=1 00:10:53.750 bs=4096 00:10:53.750 iodepth=1 00:10:53.750 norandommap=1 00:10:53.750 numjobs=1 00:10:53.750 00:10:53.750 [job0] 00:10:53.750 filename=/dev/nvme0n1 00:10:53.750 [job1] 00:10:53.750 filename=/dev/nvme0n2 00:10:53.750 [job2] 00:10:53.750 filename=/dev/nvme0n3 00:10:53.750 [job3] 00:10:53.750 filename=/dev/nvme0n4 00:10:53.750 Could not set queue depth (nvme0n1) 00:10:53.750 Could not set queue depth (nvme0n2) 00:10:53.750 Could not set queue depth (nvme0n3) 00:10:53.750 Could not set queue depth (nvme0n4) 00:10:53.750 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.750 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.750 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.750 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.750 fio-3.35 00:10:53.750 Starting 4 threads 00:10:57.029 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:57.029 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:57.029 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4706304, buflen=4096 00:10:57.029 fio: pid=457652, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.029 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.029 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:57.029 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=24186880, buflen=4096 00:10:57.029 fio: pid=457651, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.288 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2359296, buflen=4096 00:10:57.288 fio: pid=457649, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.288 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.288 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:57.546 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:10:57.546 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:57.546 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1069056, buflen=4096 00:10:57.546 fio: pid=457650, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.546 00:10:57.546 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=457649: Thu Jul 25 09:25:30 2024 00:10:57.546 read: IOPS=170, BW=679KiB/s (695kB/s)(2304KiB/3393msec) 00:10:57.546 slat (usec): min=4, max=23698, avg=49.82, stdev=986.25 00:10:57.546 clat (usec): min=168, max=41169, avg=5820.27, stdev=14001.70 00:10:57.546 lat (usec): min=172, max=64771, avg=5870.15, stdev=14143.26 00:10:57.546 clat percentiles (usec): 00:10:57.546 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:10:57.546 | 30.00th=[ 215], 40.00th=[ 227], 50.00th=[ 247], 60.00th=[ 265], 00:10:57.546 | 70.00th=[ 281], 80.00th=[ 314], 90.00th=[41157], 95.00th=[41157], 00:10:57.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:57.546 | 99.99th=[41157] 00:10:57.546 bw ( KiB/s): min= 96, max= 752, per=2.45%, avg=209.33, stdev=265.93, samples=6 00:10:57.546 iops : min= 24, max= 188, avg=52.33, stdev=66.48, samples=6 00:10:57.546 lat (usec) : 250=53.38%, 500=32.24%, 750=0.52% 00:10:57.546 lat (msec) : 50=13.69% 00:10:57.546 cpu : usr=0.06%, sys=0.18%, ctx=579, majf=0, minf=1 00:10:57.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.546 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=457650: Thu Jul 25 09:25:30 2024 00:10:57.546 read: IOPS=70, BW=282KiB/s (289kB/s)(1044KiB/3696msec) 00:10:57.546 slat (usec): min=5, max=10875, avg=123.16, stdev=1048.48 00:10:57.546 clat (usec): min=212, max=43776, avg=13862.97, stdev=19233.13 00:10:57.546 lat (usec): min=218, max=52024, avg=13986.47, stdev=19357.17 00:10:57.546 clat percentiles (usec): 00:10:57.546 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 262], 00:10:57.546 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 371], 00:10:57.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:57.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:10:57.546 | 99.99th=[43779] 00:10:57.546 bw ( KiB/s): min= 96, max= 1239, per=3.06%, avg=261.57, stdev=431.02, samples=7 00:10:57.546 iops : min= 24, max= 309, avg=65.29, stdev=107.47, samples=7 00:10:57.546 lat (usec) : 250=4.96%, 500=60.31%, 750=1.15% 00:10:57.546 lat (msec) : 50=33.21% 00:10:57.546 cpu : usr=0.03%, sys=0.11%, ctx=269, majf=0, minf=1 00:10:57.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 issued rwts: total=262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.546 job2: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=457651: Thu Jul 25 09:25:30 2024 00:10:57.546 read: IOPS=1892, BW=7571KiB/s (7752kB/s)(23.1MiB/3120msec) 00:10:57.546 slat (nsec): min=6031, max=45903, avg=12564.37, stdev=4804.79 00:10:57.546 clat (usec): min=172, max=41353, avg=508.58, stdev=3298.57 00:10:57.546 lat (usec): min=179, max=41369, avg=521.14, stdev=3298.91 00:10:57.546 clat percentiles (usec): 00:10:57.546 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:10:57.546 | 30.00th=[ 215], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:10:57.546 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:10:57.546 | 99.00th=[ 338], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:57.546 | 99.99th=[41157] 00:10:57.546 bw ( KiB/s): min= 112, max=18208, per=92.14%, avg=7869.33, stdev=7429.97, samples=6 00:10:57.546 iops : min= 28, max= 4552, avg=1967.33, stdev=1857.49, samples=6 00:10:57.546 lat (usec) : 250=52.95%, 500=46.31%, 750=0.07% 00:10:57.546 lat (msec) : 50=0.66% 00:10:57.546 cpu : usr=1.73%, sys=3.66%, ctx=5906, majf=0, minf=1 00:10:57.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 issued rwts: total=5906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.546 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=457652: Thu Jul 25 09:25:30 2024 00:10:57.546 read: IOPS=398, BW=1592KiB/s (1630kB/s)(4596KiB/2887msec) 00:10:57.546 slat (nsec): min=5693, max=38096, avg=7191.18, stdev=2788.99 00:10:57.546 clat (usec): min=180, max=44019, avg=2494.29, stdev=9356.98 00:10:57.546 lat (usec): min=186, max=44040, avg=2501.47, stdev=9359.01 00:10:57.546 clat percentiles (usec): 00:10:57.546 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:10:57.546 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 223], 00:10:57.546 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 277], 95.00th=[40633], 00:10:57.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[43779], 00:10:57.546 | 99.99th=[43779] 00:10:57.546 bw ( KiB/s): min= 96, max= 8216, per=21.31%, avg=1820.80, stdev=3581.14, samples=5 00:10:57.546 iops : min= 24, max= 2054, avg=455.20, stdev=895.28, samples=5 00:10:57.546 lat (usec) : 250=83.57%, 500=10.70% 00:10:57.546 lat (msec) : 4=0.09%, 50=5.57% 00:10:57.546 cpu : usr=0.14%, sys=0.49%, ctx=1151, majf=0, minf=1 00:10:57.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.546 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.546 00:10:57.546 Run status group 0 (all jobs): 00:10:57.546 READ: bw=8540KiB/s (8745kB/s), 282KiB/s-7571KiB/s (289kB/s-7752kB/s), io=30.8MiB (32.3MB), run=2887-3696msec 00:10:57.546 00:10:57.546 Disk stats (read/write): 00:10:57.546 nvme0n1: ios=422/0, merge=0/0, ticks=3319/0, in_queue=3319, util=95.22% 00:10:57.546 nvme0n2: ios=296/0, merge=0/0, ticks=4448/0, in_queue=4448, util=98.50% 00:10:57.546 nvme0n3: ios=5905/0, merge=0/0, ticks=2989/0, in_queue=2989, 
util=96.76% 00:10:57.546 nvme0n4: ios=1191/0, merge=0/0, ticks=3911/0, in_queue=3911, util=98.98% 00:10:57.804 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.804 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:58.062 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.062 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:58.320 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.320 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:58.578 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.578 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:58.835 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:58.835 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 457437 00:10:58.835 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:58.835 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:59.092 nvmf hotplug test: fio failed as expected 00:10:59.092 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.350 rmmod nvme_tcp 00:10:59.350 rmmod nvme_fabrics 00:10:59.350 rmmod nvme_keyring 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 455397 ']' 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 455397 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 455397 ']' 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 455397 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 455397 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 455397' 00:10:59.350 killing process with pid 455397 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 455397 00:10:59.350 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 455397 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.609 09:25:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.609 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.141 00:11:02.141 real 0m23.985s 00:11:02.141 user 1m24.730s 00:11:02.141 sys 0m6.375s 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.141 ************************************ 00:11:02.141 END TEST nvmf_fio_target 00:11:02.141 ************************************ 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.141 ************************************ 00:11:02.141 START TEST nvmf_bdevio 00:11:02.141 ************************************ 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:02.141 * Looking for test storage... 
00:11:02.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.141 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:04.041 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:04.041 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:04.041 Found net devices under 0000:82:00.0: cvl_0_0 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.041 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:04.042 Found net devices under 0000:82:00.1: cvl_0_1 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.042 09:25:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:11:04.042 00:11:04.042 --- 10.0.0.2 ping statistics --- 00:11:04.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.042 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:04.042 00:11:04.042 --- 10.0.0.1 ping statistics --- 00:11:04.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.042 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=460280 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 460280 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 460280 ']' 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.042 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.042 [2024-07-25 09:25:36.573658] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
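The nvmf_tcp_init traces above (nvmf/common.sh@229-268) build the two-namespace TCP test topology before nvmf_tgt is launched: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the initiator-side port cvl_0_1 keeps 10.0.0.1 in the default namespace, TCP port 4420 is opened in iptables, and reachability is checked in both directions with ping. A minimal standalone sketch of that sequence, using only the device names and addresses seen in this log (not the helper itself):

    # Sketch of the nvmf_tcp_init steps traced above
    TARGET_IF=cvl_0_0            # ends up inside the target namespace
    INITIATOR_IF=cvl_0_1         # stays in the default namespace
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

Because only the target-side port lives in the namespace, the nvmf_tgt launched next has to be wrapped in ip netns exec cvl_0_0_ns_spdk, which is what NVMF_TARGET_NS_CMD provides in the traces above.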
00:11:04.042 [2024-07-25 09:25:36.573760] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.042 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.042 [2024-07-25 09:25:36.642712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.042 [2024-07-25 09:25:36.765522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.042 [2024-07-25 09:25:36.765575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.042 [2024-07-25 09:25:36.765592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.042 [2024-07-25 09:25:36.765605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.042 [2024-07-25 09:25:36.765617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.042 [2024-07-25 09:25:36.765692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:04.042 [2024-07-25 09:25:36.765747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:04.042 [2024-07-25 09:25:36.765800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:04.042 [2024-07-25 09:25:36.765804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.974 [2024-07-25 09:25:37.589815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.974 Malloc0 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.974 [2024-07-25 09:25:37.641988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:04.974 { 00:11:04.974 "params": { 00:11:04.974 "name": "Nvme$subsystem", 00:11:04.974 "trtype": "$TEST_TRANSPORT", 00:11:04.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.974 "adrfam": "ipv4", 00:11:04.974 "trsvcid": "$NVMF_PORT", 00:11:04.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.974 "hdgst": ${hdgst:-false}, 00:11:04.974 "ddgst": ${ddgst:-false} 00:11:04.974 }, 00:11:04.974 "method": "bdev_nvme_attach_controller" 00:11:04.974 } 00:11:04.974 EOF 00:11:04.974 )") 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
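The rpc_cmd traces above (bdevio.sh@18-22) provision the target that the bdevio run below exercises: a TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 containing that namespace, and a listener on 10.0.0.2:4420. As a sketch only, the same setup expressed as direct rpc.py calls against a target started the way this log starts it (paths shortened to the SPDK repo root; socket readiness waiting omitted, the test itself uses waitforlisten and the rpc_cmd wrapper):

    # Start the target inside the test namespace, then configure it over RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    # ... wait for the RPC socket to come up (waitforlisten in the traces) ...
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side of the test then consumes the JSON that gen_nvmf_target_json prints just below via --json /dev/fd/62, so no separate bdevio config file is written.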
00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:04.974 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:04.974 "params": { 00:11:04.974 "name": "Nvme1", 00:11:04.974 "trtype": "tcp", 00:11:04.974 "traddr": "10.0.0.2", 00:11:04.974 "adrfam": "ipv4", 00:11:04.974 "trsvcid": "4420", 00:11:04.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.974 "hdgst": false, 00:11:04.974 "ddgst": false 00:11:04.974 }, 00:11:04.974 "method": "bdev_nvme_attach_controller" 00:11:04.974 }' 00:11:04.974 [2024-07-25 09:25:37.693708] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:04.974 [2024-07-25 09:25:37.693800] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460435 ] 00:11:05.232 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.232 [2024-07-25 09:25:37.757169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.232 [2024-07-25 09:25:37.872774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.232 [2024-07-25 09:25:37.872822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.232 [2024-07-25 09:25:37.872825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.489 I/O targets: 00:11:05.489 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:05.489 00:11:05.489 00:11:05.489 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.489 http://cunit.sourceforge.net/ 00:11:05.489 00:11:05.489 00:11:05.489 Suite: bdevio tests on: Nvme1n1 00:11:05.489 Test: blockdev write read block ...passed 00:11:05.747 Test: blockdev write zeroes read block ...passed 00:11:05.747 Test: blockdev write zeroes read no split ...passed 00:11:05.747 Test: blockdev write zeroes read split ...passed 00:11:05.747 Test: blockdev write zeroes read split partial ...passed 00:11:05.747 Test: blockdev reset ...[2024-07-25 09:25:38.372210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:05.747 [2024-07-25 09:25:38.372310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504580 (9): Bad file descriptor 00:11:05.747 [2024-07-25 09:25:38.474564] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:05.747 passed 00:11:05.747 Test: blockdev write read 8 blocks ...passed 00:11:06.005 Test: blockdev write read size > 128k ...passed 00:11:06.005 Test: blockdev write read invalid size ...passed 00:11:06.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.005 Test: blockdev write read max offset ...passed 00:11:06.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.005 Test: blockdev writev readv 8 blocks ...passed 00:11:06.005 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.005 Test: blockdev writev readv block ...passed 00:11:06.005 Test: blockdev writev readv size > 128k ...passed 00:11:06.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.005 Test: blockdev comparev and writev ...[2024-07-25 09:25:38.686594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.686633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.686659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.686677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.687058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.687083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.687105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.687128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.687479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.687503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.687525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.687541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.687908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.687932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:06.005 [2024-07-25 09:25:38.687954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.005 [2024-07-25 09:25:38.687970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:06.005 passed 00:11:06.263 Test: blockdev nvme passthru rw ...passed 00:11:06.263 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:25:38.769648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.263 [2024-07-25 09:25:38.769678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:06.263 [2024-07-25 09:25:38.769819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.263 [2024-07-25 09:25:38.769841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:06.263 [2024-07-25 09:25:38.769978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.263 [2024-07-25 09:25:38.770000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:06.263 [2024-07-25 09:25:38.770141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.263 [2024-07-25 09:25:38.770164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:06.263 passed 00:11:06.263 Test: blockdev nvme admin passthru ...passed 00:11:06.263 Test: blockdev copy ...passed 00:11:06.263 00:11:06.263 Run Summary: Type Total Ran Passed Failed Inactive 00:11:06.263 suites 1 1 n/a 0 0 00:11:06.263 tests 23 23 23 0 0 00:11:06.263 asserts 152 152 152 0 n/a 00:11:06.263 00:11:06.263 Elapsed time = 1.295 seconds 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.520 rmmod nvme_tcp 00:11:06.520 rmmod nvme_fabrics 00:11:06.520 rmmod nvme_keyring 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
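From here nvmftestfini unwinds the test environment: nvmfcleanup has just unloaded nvme-tcp/nvme-fabrics on the initiator side (the rmmod lines above), killprocess stops the nvmf_tgt reactor (pid 460280 below; the same path ran earlier for pid 455397 after the fio test), and nvmf_tcp_fini drops the test namespace and flushes cvl_0_1. A condensed sketch of that teardown; the namespace removal itself runs with xtrace disabled in the log, so the ip netns delete line is an assumed equivalent rather than a traced command:

    # Condensed nvmftestfini teardown (simplified from the traces around this point)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop the nvmf_tgt reactor
    ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns (hidden by xtrace_disable)
    ip -4 addr flush cvl_0_1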
00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 460280 ']' 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 460280 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 460280 ']' 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 460280 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460280 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460280' 00:11:06.520 killing process with pid 460280 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 460280 00:11:06.520 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 460280 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.778 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.310 00:11:09.310 real 0m7.200s 00:11:09.310 user 0m14.150s 00:11:09.310 sys 0m2.132s 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 ************************************ 00:11:09.310 END TEST nvmf_bdevio 00:11:09.310 ************************************ 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:09.310 00:11:09.310 real 3m55.038s 00:11:09.310 user 10m14.231s 00:11:09.310 sys 1m7.883s 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 ************************************ 00:11:09.310 END TEST nvmf_target_core 00:11:09.310 ************************************ 00:11:09.310 09:25:41 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.310 09:25:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.310 09:25:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.310 09:25:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 ************************************ 00:11:09.310 START TEST nvmf_target_extra 00:11:09.310 ************************************ 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.310 * Looking for test storage... 00:11:09.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.310 09:25:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
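Every suite in this log is dispatched through the run_test helper from autotest_common.sh, which is what produces the START TEST / END TEST banners and the real/user/sys timing seen above for nvmf_fio_target and nvmf_bdevio. A much-simplified sketch of that pattern (banner width, xtrace handling and failure bookkeeping are reduced for illustration; the real helper lives in common/autotest_common.sh):

    # Simplified run_test pattern, illustrative only
    run_test() {
        local name=$1; shift
        [ "$#" -lt 1 ] && return 1        # needs a command to run, cf. the '[' 3 -le 1 ']' guard above
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }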
00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.311 ************************************ 00:11:09.311 START TEST nvmf_example 00:11:09.311 ************************************ 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:09.311 * Looking for test storage... 00:11:09.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.311 09:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.311 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.214 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:11.215 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:11.215 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:11.215 Found net devices under 0000:82:00.0: cvl_0_0 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.215 09:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:11.215 Found net devices under 0000:82:00.1: cvl_0_1 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:11.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:11.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:11:11.215 00:11:11.215 --- 10.0.0.2 ping statistics --- 00:11:11.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.215 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:11.215 00:11:11.215 --- 10.0.0.1 ping statistics --- 00:11:11.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.215 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=462564 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 462564 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 462564 ']' 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.215 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.216 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.149 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:12.408 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:12.408 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.376 Initializing NVMe Controllers 00:11:22.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:22.376 Initialization complete. Launching workers. 00:11:22.376 ======================================================== 00:11:22.376 Latency(us) 00:11:22.376 Device Information : IOPS MiB/s Average min max 00:11:22.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14941.20 58.36 4283.75 726.09 15396.51 00:11:22.376 ======================================================== 00:11:22.376 Total : 14941.20 58.36 4283.75 726.09 15396.51 00:11:22.376 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.376 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.376 rmmod nvme_tcp 00:11:22.634 rmmod nvme_fabrics 00:11:22.634 rmmod nvme_keyring 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 462564 ']' 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 462564 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 462564 ']' 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 462564 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 462564 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 462564' 00:11:22.634 killing process with pid 462564 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 462564 00:11:22.634 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 462564 00:11:22.893 nvmf threads initialize successfully 00:11:22.893 bdev subsystem init successfully 00:11:22.893 created a nvmf target service 00:11:22.893 create targets's poll groups done 00:11:22.893 all subsystems of target started 00:11:22.893 nvmf target is running 00:11:22.893 all subsystems of target stopped 00:11:22.893 destroy targets's poll groups done 00:11:22.893 destroyed the nvmf target service 00:11:22.893 bdev subsystem finish successfully 00:11:22.893 nvmf threads destroy successfully 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.893 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.796 00:11:24.796 real 0m15.833s 00:11:24.796 user 0m44.786s 00:11:24.796 sys 0m3.380s 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.796 ************************************ 00:11:24.796 END TEST nvmf_example 00:11:24.796 ************************************ 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.796 09:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
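Between the START and END banners above, nvmf_example.sh split the two E810 ports across a network namespace (target side cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, initiator side cvl_0_1 at 10.0.0.1), started the example nvmf target inside that namespace, provisioned it over the RPC socket, and then drove it with spdk_nvme_perf. Condensed into a hedged, standalone sketch (the commands are taken from the trace above; it assumes an SPDK checkout as the working directory, that rpc_cmd is equivalent to scripts/rpc.py against /var/tmp/spdk.sock, and that the cvl_0_* interface names exist on the host):

#!/usr/bin/env bash
set -e
# Move one port into a namespace so target (10.0.0.2) and initiator (10.0.0.1)
# talk over real NICs on the same host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the example target in the namespace, then provision it over RPC.
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
sleep 3   # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512          # 64 MiB malloc bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 10-second, 4 KiB, queue-depth-64 mixed random read/write run, matching the
# latency table reported above.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'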
00:11:25.057 ************************************ 00:11:25.057 START TEST nvmf_filesystem 00:11:25.057 ************************************ 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:25.057 * Looking for test storage... 00:11:25.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:25.057 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # 
CONFIG_EXAMPLES=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:25.058 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:11:25.058 #define SPDK_CONFIG_H 00:11:25.058 #define SPDK_CONFIG_APPS 1 00:11:25.058 #define SPDK_CONFIG_ARCH native 00:11:25.058 #undef SPDK_CONFIG_ASAN 00:11:25.058 #undef SPDK_CONFIG_AVAHI 00:11:25.058 #undef SPDK_CONFIG_CET 00:11:25.058 #define SPDK_CONFIG_COVERAGE 1 00:11:25.058 #define SPDK_CONFIG_CROSS_PREFIX 00:11:25.058 #undef SPDK_CONFIG_CRYPTO 00:11:25.058 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:25.058 #undef SPDK_CONFIG_CUSTOMOCF 00:11:25.058 #undef SPDK_CONFIG_DAOS 00:11:25.058 #define SPDK_CONFIG_DAOS_DIR 00:11:25.058 #define SPDK_CONFIG_DEBUG 1 00:11:25.058 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:25.058 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:25.058 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:25.058 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:25.058 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:25.058 #undef SPDK_CONFIG_DPDK_UADK 00:11:25.058 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:25.058 #define SPDK_CONFIG_EXAMPLES 1 00:11:25.058 #undef SPDK_CONFIG_FC 00:11:25.058 #define SPDK_CONFIG_FC_PATH 00:11:25.058 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:25.058 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:25.058 #undef SPDK_CONFIG_FUSE 00:11:25.058 #undef SPDK_CONFIG_FUZZER 00:11:25.058 #define SPDK_CONFIG_FUZZER_LIB 00:11:25.058 #undef SPDK_CONFIG_GOLANG 00:11:25.058 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:25.058 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:25.058 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:25.058 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:25.058 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:25.058 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:25.059 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:25.059 #define SPDK_CONFIG_IDXD 1 00:11:25.059 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:25.059 #undef SPDK_CONFIG_IPSEC_MB 00:11:25.059 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:25.059 #define SPDK_CONFIG_ISAL 1 00:11:25.059 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:25.059 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:25.059 #define SPDK_CONFIG_LIBDIR 00:11:25.059 #undef SPDK_CONFIG_LTO 00:11:25.059 #define SPDK_CONFIG_MAX_LCORES 128 00:11:25.059 #define SPDK_CONFIG_NVME_CUSE 1 00:11:25.059 #undef SPDK_CONFIG_OCF 00:11:25.059 #define SPDK_CONFIG_OCF_PATH 00:11:25.059 #define SPDK_CONFIG_OPENSSL_PATH 00:11:25.059 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:25.059 #define SPDK_CONFIG_PGO_DIR 00:11:25.059 #undef SPDK_CONFIG_PGO_USE 00:11:25.059 #define SPDK_CONFIG_PREFIX /usr/local 00:11:25.059 #undef SPDK_CONFIG_RAID5F 00:11:25.059 #undef SPDK_CONFIG_RBD 00:11:25.059 #define SPDK_CONFIG_RDMA 1 00:11:25.059 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:25.059 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:25.059 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:25.059 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:25.059 #define SPDK_CONFIG_SHARED 1 00:11:25.059 #undef SPDK_CONFIG_SMA 00:11:25.059 #define SPDK_CONFIG_TESTS 1 00:11:25.059 #undef SPDK_CONFIG_TSAN 00:11:25.059 #define SPDK_CONFIG_UBLK 1 00:11:25.059 #define SPDK_CONFIG_UBSAN 1 00:11:25.059 #undef SPDK_CONFIG_UNIT_TESTS 00:11:25.059 #undef SPDK_CONFIG_URING 00:11:25.059 #define SPDK_CONFIG_URING_PATH 00:11:25.059 #undef SPDK_CONFIG_URING_ZNS 00:11:25.059 #undef SPDK_CONFIG_USDT 00:11:25.059 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:25.059 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:25.059 #define SPDK_CONFIG_VFIO_USER 1 00:11:25.059 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:25.059 #define SPDK_CONFIG_VHOST 1 00:11:25.059 
#define SPDK_CONFIG_VIRTIO 1 00:11:25.059 #undef SPDK_CONFIG_VTUNE 00:11:25.059 #define SPDK_CONFIG_VTUNE_DIR 00:11:25.059 #define SPDK_CONFIG_WERROR 1 00:11:25.059 #define SPDK_CONFIG_WPDK_DIR 00:11:25.059 #undef SPDK_CONFIG_XNVME 00:11:25.059 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:25.059 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:25.059 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
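The long run of ': 0' / ': 1' / ': tcp' lines paired with export statements that begins here is autotest_common.sh giving every SPDK_TEST_*/SPDK_RUN_* flag a default value and exporting it; the non-zero values reflect what the CI job exported before the tests started. A small hedged sketch of that idiom (the exact parameter-expansion syntax and the flag defaults shown are assumptions for illustration, not a copy of the real file):

# Each flag keeps whatever value the job already exported, otherwise it falls
# back to a default; with 'set -x' the expansion is traced as ': 0', ': 1', or
# ': tcp' immediately before the matching export, as seen above.
: "${SPDK_TEST_NVME:=0}";  export SPDK_TEST_NVME              # unset here, traces as ': 0'
: "${SPDK_TEST_NVMF:=0}";  export SPDK_TEST_NVMF              # exported as 1 by this job, traces as ': 1'
: "${SPDK_RUN_UBSAN:=0}";  export SPDK_RUN_UBSAN              # exported as 1 by this job
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT   # traces as ': tcp'; default shown is illustrative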
00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:25.060 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:25.060 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:25.060 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:25.061 
09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:25.061 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:11:25.061 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 464261 ]] 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 464261 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.rS2s6f 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rS2s6f/tests/target /tmp/spdk.rS2s6f 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:11:25.062 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=947712000 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4336717824 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56577908736 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994717184 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5416808448 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30987436032 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376543232 00:11:25.062 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22401024 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996934656 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=425984 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:11:25.062 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:11:25.063 * Looking for test storage... 
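The set_test_storage trace above reads df -T into mount/filesystem/size/avail arrays and then checks the candidate directories for roughly 2 GiB of free space (requested_size=2214592512, i.e. 2 GiB plus 64 MiB of slack). A simplified sketch of that check, not the exact helper from autotest_common.sh:

    # Sketch: pick a test-storage directory with enough free space, falling back to /tmp.
    requested_size=$((2 * 1024**3 + 64 * 1024**2))    # 2214592512 bytes, matching the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    fallback=$(mktemp -udt spdk.XXXXXX)               # e.g. /tmp/spdk.rS2s6f in this run
    mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # same awk as the trace
    avail_bytes=$(( $(df --output=avail -k "$mount_point" | tail -1) * 1024 ))
    if (( avail_bytes >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
    else
        mkdir -p "$fallback/tests/target"
        export SPDK_TEST_STORAGE=$fallback/tests/target
    fi
    printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"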
00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56577908736 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7631400960 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:25.063 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.064 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.657 
09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.657 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:27.658 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:27.658 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:27.658 Found net devices under 0000:82:00.0: cvl_0_0 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:27.658 Found net devices under 0000:82:00.1: cvl_0_1 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:11:27.658 00:11:27.658 --- 10.0.0.2 ping statistics --- 00:11:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.658 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:27.658 00:11:27.658 --- 10.0.0.1 ping statistics --- 00:11:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.658 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.658 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.658 ************************************ 00:11:27.658 START TEST nvmf_filesystem_no_in_capsule 00:11:27.658 ************************************ 00:11:27.658 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:11:27.658 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:27.658 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=465886 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 465886 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 465886 ']' 00:11:27.659 09:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.659 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.659 [2024-07-25 09:26:00.065764] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:27.659 [2024-07-25 09:26:00.065844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.659 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.659 [2024-07-25 09:26:00.134265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.659 [2024-07-25 09:26:00.256046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.659 [2024-07-25 09:26:00.256104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.659 [2024-07-25 09:26:00.256121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.659 [2024-07-25 09:26:00.256135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.659 [2024-07-25 09:26:00.256147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
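For readers following the trace, the nvmf_tcp_init block above reduces to a small namespace-based loopback topology: one of the two detected interfaces (cvl_0_0) is moved into a private network namespace and becomes the target-side NIC, the other (cvl_0_1) stays in the root namespace as the initiator side, and nvmf_tgt is then launched inside the namespace. A condensed sketch of those commands, with the interface names and 10.0.0.x addresses taken from this run (they are not general requirements) and the nvmf_tgt path shortened:

    # target-side NIC gets its own namespace; initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # reachability check in each direction
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF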
00:11:27.659 [2024-07-25 09:26:00.256236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.659 [2024-07-25 09:26:00.256292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.659 [2024-07-25 09:26:00.256618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.659 [2024-07-25 09:26:00.256624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 [2024-07-25 09:26:00.422896] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.917 09:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 [2024-07-25 09:26:00.610453] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.917 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:27.917 { 00:11:27.917 "name": "Malloc1", 00:11:27.917 "aliases": [ 00:11:27.917 "62fd90c5-e3e6-4ace-96a7-cf214a734e68" 00:11:27.917 ], 00:11:27.917 "product_name": "Malloc disk", 00:11:27.917 "block_size": 512, 00:11:27.918 "num_blocks": 1048576, 00:11:27.918 "uuid": "62fd90c5-e3e6-4ace-96a7-cf214a734e68", 00:11:27.918 "assigned_rate_limits": { 00:11:27.918 "rw_ios_per_sec": 0, 00:11:27.918 "rw_mbytes_per_sec": 0, 00:11:27.918 "r_mbytes_per_sec": 0, 00:11:27.918 "w_mbytes_per_sec": 0 00:11:27.918 }, 00:11:27.918 "claimed": true, 00:11:27.918 "claim_type": "exclusive_write", 00:11:27.918 "zoned": false, 00:11:27.918 "supported_io_types": { 00:11:27.918 "read": 
true, 00:11:27.918 "write": true, 00:11:27.918 "unmap": true, 00:11:27.918 "flush": true, 00:11:27.918 "reset": true, 00:11:27.918 "nvme_admin": false, 00:11:27.918 "nvme_io": false, 00:11:27.918 "nvme_io_md": false, 00:11:27.918 "write_zeroes": true, 00:11:27.918 "zcopy": true, 00:11:27.918 "get_zone_info": false, 00:11:27.918 "zone_management": false, 00:11:27.918 "zone_append": false, 00:11:27.918 "compare": false, 00:11:27.918 "compare_and_write": false, 00:11:27.918 "abort": true, 00:11:27.918 "seek_hole": false, 00:11:27.918 "seek_data": false, 00:11:27.918 "copy": true, 00:11:27.918 "nvme_iov_md": false 00:11:27.918 }, 00:11:27.918 "memory_domains": [ 00:11:27.918 { 00:11:27.918 "dma_device_id": "system", 00:11:27.918 "dma_device_type": 1 00:11:27.918 }, 00:11:27.918 { 00:11:27.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.918 "dma_device_type": 2 00:11:27.918 } 00:11:27.918 ], 00:11:27.918 "driver_specific": {} 00:11:27.918 } 00:11:27.918 ]' 00:11:27.918 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:28.175 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.739 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.739 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:28.739 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.739 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:28.739 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:30.635 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:30.635 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:30.635 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:30.892 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:31.149 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:32.081 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.013 ************************************ 00:11:33.013 START TEST filesystem_ext4 00:11:33.013 ************************************ 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:33.013 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:33.014 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:33.014 mke2fs 1.46.5 (30-Dec-2021) 00:11:33.014 Discarding device blocks: 0/522240 done 00:11:33.014 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:33.014 Filesystem UUID: d1dbdd71-d576-4a80-b7df-48cc5976bda9 00:11:33.014 Superblock backups stored on blocks: 00:11:33.014 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:33.014 00:11:33.014 Allocating group tables: 0/64 done 00:11:33.014 Writing inode tables: 0/64 done 00:11:33.271 Creating journal (8192 blocks): done 00:11:33.271 Writing superblocks and filesystem accounting information: 0/64 done 00:11:33.271 00:11:33.271 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:33.271 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.203 
09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 465886 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.203 00:11:34.203 real 0m1.286s 00:11:34.203 user 0m0.018s 00:11:34.203 sys 0m0.054s 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:34.203 ************************************ 00:11:34.203 END TEST filesystem_ext4 00:11:34.203 ************************************ 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.203 ************************************ 00:11:34.203 START TEST filesystem_btrfs 00:11:34.203 ************************************ 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:34.203 09:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:34.203 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:34.461 btrfs-progs v6.6.2 00:11:34.461 See https://btrfs.readthedocs.io for more information. 00:11:34.461 00:11:34.461 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:34.461 NOTE: several default settings have changed in version 5.15, please make sure 00:11:34.461 this does not affect your deployments: 00:11:34.461 - DUP for metadata (-m dup) 00:11:34.461 - enabled no-holes (-O no-holes) 00:11:34.461 - enabled free-space-tree (-R free-space-tree) 00:11:34.461 00:11:34.461 Label: (null) 00:11:34.461 UUID: 0c3d6877-bf15-426e-869c-b6bbf1d0e632 00:11:34.461 Node size: 16384 00:11:34.461 Sector size: 4096 00:11:34.461 Filesystem size: 510.00MiB 00:11:34.461 Block group profiles: 00:11:34.461 Data: single 8.00MiB 00:11:34.461 Metadata: DUP 32.00MiB 00:11:34.461 System: DUP 8.00MiB 00:11:34.461 SSD detected: yes 00:11:34.461 Zoned device: no 00:11:34.461 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:34.461 Runtime features: free-space-tree 00:11:34.461 Checksum: crc32c 00:11:34.461 Number of devices: 1 00:11:34.461 Devices: 00:11:34.461 ID SIZE PATH 00:11:34.461 1 510.00MiB /dev/nvme0n1p1 00:11:34.461 00:11:34.461 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:34.461 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 465886 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.395 00:11:35.395 real 0m1.018s 00:11:35.395 user 0m0.020s 00:11:35.395 sys 0m0.144s 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 ************************************ 00:11:35.395 END TEST filesystem_btrfs 00:11:35.395 ************************************ 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 ************************************ 00:11:35.395 START TEST filesystem_xfs 00:11:35.395 ************************************ 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:35.395 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:35.395 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:35.395 = sectsz=512 attr=2, projid32bit=1 00:11:35.395 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:35.395 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:35.395 data = bsize=4096 blocks=130560, imaxpct=25 00:11:35.395 = sunit=0 swidth=0 blks 00:11:35.395 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:35.395 log =internal log bsize=4096 blocks=16384, version=2 00:11:35.395 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:35.395 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:36.768 Discarding blocks...Done. 00:11:36.768 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:36.768 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.047 00:11:40.047 real 0m4.113s 00:11:40.047 user 0m0.012s 00:11:40.047 sys 0m0.091s 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.047 ************************************ 00:11:40.047 END TEST filesystem_xfs 00:11:40.047 ************************************ 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
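The ext4, btrfs and xfs subtests that just completed all exercise the exported namespace the same way; per filesystem type, the behaviour seen in the trace is roughly the loop below. This is a condensed reading of the nvmf_filesystem_create helper rather than its literal text, and the pid, device and mount paths are the ones from this run:

    for fstype in ext4 btrfs xfs; do
        force=-f; [[ $fstype == ext4 ]] && force=-F   # mke2fs spells "force" differently, as the trace shows
        mkfs.$fstype $force /dev/nvme0n1p1
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa && sync                 # small write over NVMe/TCP
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 465886                                # the nvmf_tgt process must still be alive
        lsblk -l -o NAME | grep -q -w nvme0n1         # controller and partition must still enumerate
        lsblk -l -o NAME | grep -q -w nvme0n1p1
    done

After the last subtest the trace removes the partition under an exclusive lock (flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1), syncs, and detaches the host with nvme disconnect -n nqn.2016-06.io.spdk:cnode1 before the subsystem and target process are torn down.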
00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 465886 ']' 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 465886' 00:11:40.047 killing process with pid 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@972 -- # wait 465886 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:40.047 00:11:40.047 real 0m12.758s 00:11:40.047 user 0m48.729s 00:11:40.047 sys 0m2.028s 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.047 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.047 ************************************ 00:11:40.047 END TEST nvmf_filesystem_no_in_capsule 00:11:40.047 ************************************ 00:11:40.306 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:40.306 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:40.306 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.306 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 ************************************ 00:11:40.307 START TEST nvmf_filesystem_in_capsule 00:11:40.307 ************************************ 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=467575 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 467575 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 467575 ']' 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
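The nvmf_filesystem_in_capsule run that starts here repeats the same provisioning and filesystem passes; the functional difference is the 4096 passed to nvmf_filesystem_part, which becomes the transport's in-capsule data size. Sketched as the two transport-creation calls from the trace (the -c flag sets how much write data may be carried inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer):

    # first variant (above): no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this variant: up to 4096 bytes of in-capsule data per command
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096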
00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.307 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 [2024-07-25 09:26:12.882445] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:40.307 [2024-07-25 09:26:12.882525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.307 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.307 [2024-07-25 09:26:12.952385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.565 [2024-07-25 09:26:13.075975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.565 [2024-07-25 09:26:13.076027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.565 [2024-07-25 09:26:13.076043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.565 [2024-07-25 09:26:13.076056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.565 [2024-07-25 09:26:13.076068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.565 [2024-07-25 09:26:13.076162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.565 [2024-07-25 09:26:13.076212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.565 [2024-07-25 09:26:13.076272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.565 [2024-07-25 09:26:13.076275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.128 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.128 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:41.128 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.128 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:41.128 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 [2024-07-25 09:26:13.876132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.387 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 Malloc1 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 [2024-07-25 09:26:14.058472] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:41.387 09:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:41.387 { 00:11:41.387 "name": "Malloc1", 00:11:41.387 "aliases": [ 00:11:41.387 "742c342f-4f98-4c71-998c-776842dbf6e5" 00:11:41.387 ], 00:11:41.387 "product_name": "Malloc disk", 00:11:41.387 "block_size": 512, 00:11:41.387 "num_blocks": 1048576, 00:11:41.387 "uuid": "742c342f-4f98-4c71-998c-776842dbf6e5", 00:11:41.387 "assigned_rate_limits": { 00:11:41.387 "rw_ios_per_sec": 0, 00:11:41.387 "rw_mbytes_per_sec": 0, 00:11:41.387 "r_mbytes_per_sec": 0, 00:11:41.387 "w_mbytes_per_sec": 0 00:11:41.387 }, 00:11:41.387 "claimed": true, 00:11:41.387 "claim_type": "exclusive_write", 00:11:41.387 "zoned": false, 00:11:41.387 "supported_io_types": { 00:11:41.387 "read": true, 00:11:41.387 "write": true, 00:11:41.387 "unmap": true, 00:11:41.387 "flush": true, 00:11:41.387 "reset": true, 00:11:41.387 "nvme_admin": false, 00:11:41.387 "nvme_io": false, 00:11:41.387 "nvme_io_md": false, 00:11:41.387 "write_zeroes": true, 00:11:41.387 "zcopy": true, 00:11:41.387 "get_zone_info": false, 00:11:41.387 "zone_management": false, 00:11:41.387 "zone_append": false, 00:11:41.387 "compare": false, 00:11:41.387 "compare_and_write": false, 00:11:41.387 "abort": true, 00:11:41.387 "seek_hole": false, 00:11:41.387 "seek_data": false, 00:11:41.387 "copy": true, 00:11:41.387 "nvme_iov_md": false 00:11:41.387 }, 00:11:41.387 "memory_domains": [ 00:11:41.387 { 00:11:41.387 "dma_device_id": "system", 00:11:41.387 "dma_device_type": 1 00:11:41.387 }, 00:11:41.387 { 00:11:41.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.387 "dma_device_type": 2 00:11:41.387 } 00:11:41.387 ], 00:11:41.387 "driver_specific": {} 00:11:41.387 } 00:11:41.387 ]' 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:41.387 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:41.388 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:41.645 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:41.645 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:41.645 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:41.645 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:41.645 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.210 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.210 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:42.210 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.210 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:42.210 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:44.109 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:44.109 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:44.109 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:44.367 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:44.367 09:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:44.625 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.012 ************************************ 00:11:46.012 START TEST filesystem_in_capsule_ext4 00:11:46.012 ************************************ 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:46.012 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:46.012 mke2fs 1.46.5 (30-Dec-2021) 00:11:46.012 Discarding device blocks: 0/522240 done 00:11:46.012 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:46.012 Filesystem UUID: 79e7655c-1e79-4073-8c6c-44515d2a99c2 00:11:46.012 Superblock backups stored on blocks: 00:11:46.012 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:46.012 00:11:46.012 Allocating group tables: 0/64 done 00:11:46.012 Writing inode tables: 
0/64 done 00:11:46.012 Creating journal (8192 blocks): done 00:11:46.933 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:46.933 00:11:46.933 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:46.933 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.500 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.500 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:47.500 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.500 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:47.500 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:47.500 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 467575 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.758 00:11:47.758 real 0m1.917s 00:11:47.758 user 0m0.024s 00:11:47.758 sys 0m0.051s 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:47.758 ************************************ 00:11:47.758 END TEST filesystem_in_capsule_ext4 00:11:47.758 ************************************ 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.758 
************************************ 00:11:47.758 START TEST filesystem_in_capsule_btrfs 00:11:47.758 ************************************ 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:47.758 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.016 btrfs-progs v6.6.2 00:11:48.016 See https://btrfs.readthedocs.io for more information. 00:11:48.016 00:11:48.016 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:48.016 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.016 this does not affect your deployments: 00:11:48.016 - DUP for metadata (-m dup) 00:11:48.016 - enabled no-holes (-O no-holes) 00:11:48.016 - enabled free-space-tree (-R free-space-tree) 00:11:48.016 00:11:48.016 Label: (null) 00:11:48.016 UUID: adc75a41-f22a-432d-a468-11a42778b74d 00:11:48.016 Node size: 16384 00:11:48.016 Sector size: 4096 00:11:48.016 Filesystem size: 510.00MiB 00:11:48.016 Block group profiles: 00:11:48.016 Data: single 8.00MiB 00:11:48.016 Metadata: DUP 32.00MiB 00:11:48.016 System: DUP 8.00MiB 00:11:48.016 SSD detected: yes 00:11:48.016 Zoned device: no 00:11:48.016 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.016 Runtime features: free-space-tree 00:11:48.016 Checksum: crc32c 00:11:48.016 Number of devices: 1 00:11:48.016 Devices: 00:11:48.016 ID SIZE PATH 00:11:48.016 1 510.00MiB /dev/nvme0n1p1 00:11:48.016 00:11:48.016 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:48.016 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 467575 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.950 00:11:48.950 real 0m1.151s 00:11:48.950 user 0m0.018s 00:11:48.950 sys 0m0.108s 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:11:48.950 ************************************ 00:11:48.950 END TEST filesystem_in_capsule_btrfs 00:11:48.950 ************************************ 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.950 ************************************ 00:11:48.950 START TEST filesystem_in_capsule_xfs 00:11:48.950 ************************************ 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:48.950 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:48.950 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:48.950 = sectsz=512 attr=2, projid32bit=1 00:11:48.950 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:48.950 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:48.950 data = bsize=4096 blocks=130560, imaxpct=25 00:11:48.950 = sunit=0 swidth=0 blks 00:11:48.950 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:48.950 log =internal log bsize=4096 blocks=16384, version=2 00:11:48.950 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:48.950 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:49.881 Discarding blocks...Done. 
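For reference, the make_filesystem calls traced above (common/autotest_common.sh@924-935) follow one simple pattern across the three sub-tests: ext4 is forced with -F while btrfs and xfs are forced with -f, and the resulting mkfs is pointed at the /dev/nvme0n1p1 partition created by parted earlier. Below is a minimal sketch of that pattern only, not the exact SPDK helper; the real function also keeps a retry counter (local i=0), which is omitted here.

make_filesystem() {
    local fstype=$1      # ext4 | btrfs | xfs, as passed by filesystem.sh@21
    local dev_name=$2    # e.g. /dev/nvme0n1p1, the GPT partition made above
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F         # mkfs.ext4 forces with -F
    else
        force=-f         # mkfs.btrfs and mkfs.xfs force with -f
    fi

    mkfs."$fstype" $force "$dev_name"
}

Each sub-test then mounts the new filesystem at /mnt/device, writes and removes a file with sync in between, and unmounts, exactly as the touch/sync/rm/umount trace lines that follow show.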
00:11:49.881 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:49.881 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 467575 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.776 00:11:51.776 real 0m2.610s 00:11:51.776 user 0m0.015s 00:11:51.776 sys 0m0.053s 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:51.776 ************************************ 00:11:51.776 END TEST filesystem_in_capsule_xfs 00:11:51.776 ************************************ 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1217 -- # local i=0 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 467575 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 467575 ']' 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 467575 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467575 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467575' 00:11:51.776 killing process with pid 467575 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 467575 00:11:51.776 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 467575 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:52.341 00:11:52.341 real 0m12.035s 00:11:52.341 user 0m46.231s 00:11:52.341 sys 0m1.760s 00:11:52.341 09:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 ************************************ 00:11:52.341 END TEST nvmf_filesystem_in_capsule 00:11:52.341 ************************************ 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.341 rmmod nvme_tcp 00:11:52.341 rmmod nvme_fabrics 00:11:52.341 rmmod nvme_keyring 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.341 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.874 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:54.874 00:11:54.874 real 0m29.444s 00:11:54.874 user 1m35.895s 00:11:54.874 sys 0m5.506s 00:11:54.874 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.874 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.874 ************************************ 00:11:54.874 END TEST nvmf_filesystem 00:11:54.874 ************************************ 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:54.874 09:26:27 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.874 ************************************ 00:11:54.874 START TEST nvmf_target_discovery 00:11:54.874 ************************************ 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:54.874 * Looking for test storage... 00:11:54.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.874 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.875 09:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.776 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.776 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.776 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:56.777 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.777 09:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:56.777 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:56.777 Found net devices under 0000:82:00.0: cvl_0_0 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:56.777 Found net devices under 0000:82:00.1: cvl_0_1 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:56.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:11:56.777 00:11:56.777 --- 10.0.0.2 ping statistics --- 00:11:56.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.777 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:56.777 00:11:56.777 --- 10.0.0.1 ping statistics --- 00:11:56.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.777 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.777 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=471164 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 471164 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 471164 ']' 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.778 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.778 [2024-07-25 09:26:29.360804] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:56.778 [2024-07-25 09:26:29.360911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.778 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.778 [2024-07-25 09:26:29.431067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.036 [2024-07-25 09:26:29.553243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.036 [2024-07-25 09:26:29.553300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.036 [2024-07-25 09:26:29.553325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.036 [2024-07-25 09:26:29.553339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.036 [2024-07-25 09:26:29.553352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.036 [2024-07-25 09:26:29.553449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.036 [2024-07-25 09:26:29.553501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.036 [2024-07-25 09:26:29.553551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.036 [2024-07-25 09:26:29.553554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 [2024-07-25 09:26:30.378861] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.968 09:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 Null1 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:57.968 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 [2024-07-25 09:26:30.427140] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 Null2 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 Null3 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 Null4 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.969 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:11:58.228 00:11:58.228 
Discovery Log Number of Records 6, Generation counter 6 00:11:58.228 =====Discovery Log Entry 0====== 00:11:58.228 trtype: tcp 00:11:58.228 adrfam: ipv4 00:11:58.228 subtype: current discovery subsystem 00:11:58.228 treq: not required 00:11:58.228 portid: 0 00:11:58.228 trsvcid: 4420 00:11:58.228 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.228 traddr: 10.0.0.2 00:11:58.228 eflags: explicit discovery connections, duplicate discovery information 00:11:58.228 sectype: none 00:11:58.228 =====Discovery Log Entry 1====== 00:11:58.228 trtype: tcp 00:11:58.228 adrfam: ipv4 00:11:58.228 subtype: nvme subsystem 00:11:58.228 treq: not required 00:11:58.228 portid: 0 00:11:58.228 trsvcid: 4420 00:11:58.228 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.228 traddr: 10.0.0.2 00:11:58.228 eflags: none 00:11:58.228 sectype: none 00:11:58.228 =====Discovery Log Entry 2====== 00:11:58.228 trtype: tcp 00:11:58.228 adrfam: ipv4 00:11:58.228 subtype: nvme subsystem 00:11:58.228 treq: not required 00:11:58.228 portid: 0 00:11:58.228 trsvcid: 4420 00:11:58.228 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:58.228 traddr: 10.0.0.2 00:11:58.228 eflags: none 00:11:58.228 sectype: none 00:11:58.228 =====Discovery Log Entry 3====== 00:11:58.228 trtype: tcp 00:11:58.228 adrfam: ipv4 00:11:58.228 subtype: nvme subsystem 00:11:58.228 treq: not required 00:11:58.228 portid: 0 00:11:58.228 trsvcid: 4420 00:11:58.228 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:58.228 traddr: 10.0.0.2 00:11:58.228 eflags: none 00:11:58.228 sectype: none 00:11:58.228 =====Discovery Log Entry 4====== 00:11:58.228 trtype: tcp 00:11:58.228 adrfam: ipv4 00:11:58.228 subtype: nvme subsystem 00:11:58.228 treq: not required 00:11:58.228 portid: 0 00:11:58.228 trsvcid: 4420 00:11:58.228 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:58.228 traddr: 10.0.0.2 00:11:58.228 eflags: none 00:11:58.228 sectype: none 00:11:58.228 =====Discovery Log Entry 5====== 00:11:58.228 trtype: tcp 00:11:58.228 adrfam: ipv4 00:11:58.228 subtype: discovery subsystem referral 00:11:58.228 treq: not required 00:11:58.228 portid: 0 00:11:58.228 trsvcid: 4430 00:11:58.228 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.228 traddr: 10.0.0.2 00:11:58.228 eflags: none 00:11:58.228 sectype: none 00:11:58.228 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:58.228 Perform nvmf subsystem discovery via RPC 00:11:58.228 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:58.228 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.228 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.228 [ 00:11:58.228 { 00:11:58.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.228 "subtype": "Discovery", 00:11:58.228 "listen_addresses": [ 00:11:58.228 { 00:11:58.228 "trtype": "TCP", 00:11:58.228 "adrfam": "IPv4", 00:11:58.228 "traddr": "10.0.0.2", 00:11:58.228 "trsvcid": "4420" 00:11:58.228 } 00:11:58.228 ], 00:11:58.228 "allow_any_host": true, 00:11:58.228 "hosts": [] 00:11:58.228 }, 00:11:58.228 { 00:11:58.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.228 "subtype": "NVMe", 00:11:58.228 "listen_addresses": [ 00:11:58.228 { 00:11:58.228 "trtype": "TCP", 00:11:58.228 "adrfam": "IPv4", 00:11:58.228 "traddr": "10.0.0.2", 00:11:58.228 "trsvcid": "4420" 00:11:58.228 } 00:11:58.228 ], 00:11:58.228 
"allow_any_host": true, 00:11:58.228 "hosts": [], 00:11:58.228 "serial_number": "SPDK00000000000001", 00:11:58.228 "model_number": "SPDK bdev Controller", 00:11:58.228 "max_namespaces": 32, 00:11:58.228 "min_cntlid": 1, 00:11:58.228 "max_cntlid": 65519, 00:11:58.228 "namespaces": [ 00:11:58.228 { 00:11:58.228 "nsid": 1, 00:11:58.228 "bdev_name": "Null1", 00:11:58.228 "name": "Null1", 00:11:58.228 "nguid": "A8BD31CA14D341439723C45136197EE1", 00:11:58.228 "uuid": "a8bd31ca-14d3-4143-9723-c45136197ee1" 00:11:58.228 } 00:11:58.228 ] 00:11:58.228 }, 00:11:58.228 { 00:11:58.228 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:58.228 "subtype": "NVMe", 00:11:58.228 "listen_addresses": [ 00:11:58.228 { 00:11:58.228 "trtype": "TCP", 00:11:58.228 "adrfam": "IPv4", 00:11:58.228 "traddr": "10.0.0.2", 00:11:58.228 "trsvcid": "4420" 00:11:58.228 } 00:11:58.228 ], 00:11:58.228 "allow_any_host": true, 00:11:58.228 "hosts": [], 00:11:58.228 "serial_number": "SPDK00000000000002", 00:11:58.228 "model_number": "SPDK bdev Controller", 00:11:58.228 "max_namespaces": 32, 00:11:58.228 "min_cntlid": 1, 00:11:58.228 "max_cntlid": 65519, 00:11:58.228 "namespaces": [ 00:11:58.228 { 00:11:58.228 "nsid": 1, 00:11:58.228 "bdev_name": "Null2", 00:11:58.228 "name": "Null2", 00:11:58.228 "nguid": "BA8C69C240444383A96918B73DC374F7", 00:11:58.228 "uuid": "ba8c69c2-4044-4383-a969-18b73dc374f7" 00:11:58.228 } 00:11:58.228 ] 00:11:58.228 }, 00:11:58.228 { 00:11:58.228 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:58.228 "subtype": "NVMe", 00:11:58.228 "listen_addresses": [ 00:11:58.228 { 00:11:58.228 "trtype": "TCP", 00:11:58.228 "adrfam": "IPv4", 00:11:58.228 "traddr": "10.0.0.2", 00:11:58.228 "trsvcid": "4420" 00:11:58.228 } 00:11:58.228 ], 00:11:58.228 "allow_any_host": true, 00:11:58.228 "hosts": [], 00:11:58.228 "serial_number": "SPDK00000000000003", 00:11:58.228 "model_number": "SPDK bdev Controller", 00:11:58.228 "max_namespaces": 32, 00:11:58.228 "min_cntlid": 1, 00:11:58.228 "max_cntlid": 65519, 00:11:58.228 "namespaces": [ 00:11:58.228 { 00:11:58.228 "nsid": 1, 00:11:58.228 "bdev_name": "Null3", 00:11:58.228 "name": "Null3", 00:11:58.228 "nguid": "6065DD7255154A858445916548A36FCC", 00:11:58.228 "uuid": "6065dd72-5515-4a85-8445-916548a36fcc" 00:11:58.228 } 00:11:58.228 ] 00:11:58.228 }, 00:11:58.228 { 00:11:58.228 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:58.228 "subtype": "NVMe", 00:11:58.228 "listen_addresses": [ 00:11:58.228 { 00:11:58.228 "trtype": "TCP", 00:11:58.228 "adrfam": "IPv4", 00:11:58.228 "traddr": "10.0.0.2", 00:11:58.228 "trsvcid": "4420" 00:11:58.228 } 00:11:58.228 ], 00:11:58.228 "allow_any_host": true, 00:11:58.228 "hosts": [], 00:11:58.228 "serial_number": "SPDK00000000000004", 00:11:58.228 "model_number": "SPDK bdev Controller", 00:11:58.228 "max_namespaces": 32, 00:11:58.228 "min_cntlid": 1, 00:11:58.228 "max_cntlid": 65519, 00:11:58.228 "namespaces": [ 00:11:58.228 { 00:11:58.229 "nsid": 1, 00:11:58.229 "bdev_name": "Null4", 00:11:58.229 "name": "Null4", 00:11:58.229 "nguid": "36EB1BEEBE5F424F8505E450E780E522", 00:11:58.229 "uuid": "36eb1bee-be5f-424f-8505-e450e780e522" 00:11:58.229 } 00:11:58.229 ] 00:11:58.229 } 00:11:58.229 ] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.229 09:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
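For reference, the bdev/subsystem cycle traced in this discovery test reduces to the loop below. This is a minimal sketch, not the test script itself: it assumes an nvmf target is already running, that scripts/rpc.py from the SPDK tree is on PATH (the rpc_cmd wrapper seen in the trace ultimately calls it), and it reuses the addresses, sizes and NQNs that appear above.

    rpc=./scripts/rpc.py
    # setup: one 100 MiB null bdev + subsystem + namespace + TCP listener per target
    for i in $(seq 1 4); do
        $rpc bdev_null_create "Null$i" 102400 512
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    # discovery checks (nvme discover, nvmf_get_subsystems) run here
    # teardown: drop the subsystems and their backing bdevs again
    for i in $(seq 1 4); do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "Null$i"
    done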
00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.229 rmmod nvme_tcp 00:11:58.229 rmmod nvme_fabrics 00:11:58.229 rmmod nvme_keyring 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- 
# return 0 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 471164 ']' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 471164 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 471164 ']' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 471164 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 471164 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 471164' 00:11:58.229 killing process with pid 471164 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 471164 00:11:58.229 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 471164 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.487 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.019 00:12:01.019 real 0m6.183s 00:12:01.019 user 0m7.377s 00:12:01.019 sys 0m1.915s 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.019 ************************************ 00:12:01.019 END TEST nvmf_target_discovery 00:12:01.019 ************************************ 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.019 ************************************ 00:12:01.019 START TEST nvmf_referrals 00:12:01.019 ************************************ 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:01.019 * Looking for test storage... 00:12:01.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.019 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.020 09:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.020 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
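The nvmf/common.sh trace that follows builds per-vendor PCI ID tables (e810, x722, mlx) and then resolves each matching PCI address to its kernel net device through sysfs. A stripped-down illustration of that lookup, using the BDF that appears later in the log and not the actual helper functions:

    pci=0000:82:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"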
00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:02.921 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.921 
09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:02.921 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:02.921 Found net devices under 0000:82:00.0: cvl_0_0 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:02.921 Found net devices under 0000:82:00.1: cvl_0_1 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.921 09:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.921 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:02.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:02.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:12:02.922 00:12:02.922 --- 10.0.0.2 ping statistics --- 00:12:02.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.922 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:12:02.922 00:12:02.922 --- 10.0.0.1 ping statistics --- 00:12:02.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.922 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=473264 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 473264 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 473264 ']' 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
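The nvmfappstart step above launches the target inside the test namespace and blocks until its RPC socket answers. The snippet below is a simplified stand-in for that launch-and-wait (waitforlisten) sequence, assuming the SPDK build and scripts paths shown in the log; the polling loop is an approximation, not the helper's real implementation.

    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app is ready (stand-in for waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"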
00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.922 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.922 [2024-07-25 09:26:35.548770] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:02.922 [2024-07-25 09:26:35.548839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.922 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.922 [2024-07-25 09:26:35.613079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.181 [2024-07-25 09:26:35.726954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.181 [2024-07-25 09:26:35.727008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.181 [2024-07-25 09:26:35.727036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.181 [2024-07-25 09:26:35.727047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.181 [2024-07-25 09:26:35.727057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.181 [2024-07-25 09:26:35.727139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.181 [2024-07-25 09:26:35.727204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.181 [2024-07-25 09:26:35.727271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.181 [2024-07-25 09:26:35.727273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.181 [2024-07-25 09:26:35.885743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.181 09:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.181 [2024-07-25 09:26:35.897969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.181 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:03.439 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:03.439 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.697 09:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:03.697 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
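The remaining checks compare what the target reports over RPC with what a host sees over the discovery service. The round trip being exercised reduces to roughly the following, reusing the addresses and ports from the log; the jq filters approximate what get_referral_ips does and omit the --hostnqn/--hostid options shown above.

    rpc=./scripts/rpc.py
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # referral list as the target itself reports it
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # referral list as a host sees it on the discovery service (port 8009)
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1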
00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.955 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.213 09:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.213 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.471 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.729 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.729 rmmod nvme_tcp 00:12:04.729 rmmod nvme_fabrics 00:12:04.729 rmmod nvme_keyring 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 473264 ']' 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 473264 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 473264 ']' 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 473264 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473264 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473264' 00:12:04.987 killing process with pid 473264 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 473264 00:12:04.987 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 473264 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.246 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.157 00:12:07.157 real 0m6.524s 00:12:07.157 user 0m9.360s 00:12:07.157 sys 0m2.078s 00:12:07.157 09:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.157 ************************************ 00:12:07.157 END TEST nvmf_referrals 00:12:07.157 ************************************ 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.157 ************************************ 00:12:07.157 START TEST nvmf_connect_disconnect 00:12:07.157 ************************************ 00:12:07.157 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:07.419 * Looking for test storage... 00:12:07.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.419 09:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.419 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:09.321 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:09.321 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.321 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.322 09:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:09.322 Found net devices under 0000:82:00.0: cvl_0_0 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:09.322 Found net devices under 0000:82:00.1: cvl_0_1 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.322 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.322 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:09.322 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.322 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.322 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:09.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:09.580 00:12:09.580 --- 10.0.0.2 ping statistics --- 00:12:09.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.580 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:09.580 00:12:09.580 --- 10.0.0.1 ping statistics --- 00:12:09.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.580 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=475552 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 475552 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 475552 ']' 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.580 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.581 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.581 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.581 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.581 [2024-07-25 09:26:42.147829] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:12:09.581 [2024-07-25 09:26:42.147925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.581 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.581 [2024-07-25 09:26:42.215324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.838 [2024-07-25 09:26:42.328228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.838 [2024-07-25 09:26:42.328272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.838 [2024-07-25 09:26:42.328301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.838 [2024-07-25 09:26:42.328314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.838 [2024-07-25 09:26:42.328324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.838 [2024-07-25 09:26:42.328399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.838 [2024-07-25 09:26:42.328457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.838 [2024-07-25 09:26:42.328506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.838 [2024-07-25 09:26:42.328509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.838 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.838 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:12:09.838 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.838 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 [2024-07-25 09:26:42.484565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.839 09:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 [2024-07-25 09:26:42.535562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:09.839 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:13.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.993 09:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.993 rmmod nvme_tcp 00:12:23.993 rmmod nvme_fabrics 00:12:23.993 rmmod nvme_keyring 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 475552 ']' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 475552 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 475552 ']' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 475552 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 475552 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 475552' 00:12:23.993 killing process with pid 475552 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 475552 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 475552 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.993 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.528 00:12:26.528 real 0m18.912s 00:12:26.528 user 0m56.820s 00:12:26.528 sys 0m3.417s 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.528 09:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.528 ************************************ 00:12:26.528 END TEST nvmf_connect_disconnect 00:12:26.528 ************************************ 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.528 ************************************ 00:12:26.528 START TEST nvmf_multitarget 00:12:26.528 ************************************ 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.528 * Looking for test storage... 00:12:26.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.528 09:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:26.528 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.529 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:28.430 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.430 09:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:28.430 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.430 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:28.431 Found net devices under 0000:82:00.0: cvl_0_0 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:28.431 Found net devices under 0000:82:00.1: cvl_0_1 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.431 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:12:28.431 00:12:28.431 --- 10.0.0.2 ping statistics --- 00:12:28.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.431 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:12:28.431 00:12:28.431 --- 10.0.0.1 ping statistics --- 00:12:28.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.431 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=479377 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 479377 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 479377 ']' 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
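The ip/iptables sequence above is nvmf_tcp_init from test/nvmf/common.sh moving one port of the dual-port E810 NIC into a private network namespace, so the target and the initiator talk over a real link on the same host. A minimal standalone sketch of that topology, reusing the interface names and addresses from this run (both are host-specific):

  # target port lives in its own namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # let NVMe/TCP traffic on port 4420 through
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # verify both directions, as the log does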
00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.431 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.431 [2024-07-25 09:27:01.143917] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:28.431 [2024-07-25 09:27:01.143995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.690 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.690 [2024-07-25 09:27:01.213561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.690 [2024-07-25 09:27:01.334812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.690 [2024-07-25 09:27:01.334872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.690 [2024-07-25 09:27:01.334889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.690 [2024-07-25 09:27:01.334903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.690 [2024-07-25 09:27:01.334915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.690 [2024-07-25 09:27:01.338380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.690 [2024-07-25 09:27:01.338448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.690 [2024-07-25 09:27:01.338469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.690 [2024-07-25 09:27:01.338473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:28.948 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:29.205 "nvmf_tgt_1" 00:12:29.205 09:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:29.205 "nvmf_tgt_2" 00:12:29.205 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.205 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:29.463 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:29.463 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:29.463 true 00:12:29.463 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:29.463 true 00:12:29.463 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.463 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.720 rmmod nvme_tcp 00:12:29.720 rmmod nvme_fabrics 00:12:29.720 rmmod nvme_keyring 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.720 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 479377 ']' 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 479377 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 479377 ']' 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 479377 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
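Condensed, the multitarget exercise above is: confirm only the default target exists, add two more, confirm three, delete both, confirm one again, with each count taken from nvmf_get_targets piped through jq. A sketch of that flow ($RPC_PY abbreviates the full multitarget_rpc.py path printed in the log, and $SPDK_DIR stands in for the workspace checkout):

  RPC_PY=$SPDK_DIR/test/nvmf/target/multitarget_rpc.py
  [ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]      # only the default target
  $RPC_PY nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC_PY nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC_PY nvmf_get_targets | jq length)" -eq 3 ]      # default plus the two new targets
  $RPC_PY nvmf_delete_target -n nvmf_tgt_1
  $RPC_PY nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default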
00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 479377 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 479377' 00:12:29.721 killing process with pid 479377 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 479377 00:12:29.721 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 479377 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.980 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.508 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.508 00:12:32.508 real 0m5.846s 00:12:32.508 user 0m6.572s 00:12:32.508 sys 0m1.920s 00:12:32.508 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.508 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.508 ************************************ 00:12:32.508 END TEST nvmf_multitarget 00:12:32.508 ************************************ 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.509 ************************************ 00:12:32.509 START TEST nvmf_rpc 00:12:32.509 ************************************ 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.509 * Looking for test storage... 
00:12:32.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.509 09:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.509 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.408 09:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:34.408 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:34.408 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.408 
09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:34.408 Found net devices under 0000:82:00.0: cvl_0_0 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:34.408 Found net devices under 0000:82:00.1: cvl_0_1 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.408 09:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:12:34.408 00:12:34.408 --- 10.0.0.2 ping statistics --- 00:12:34.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.408 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:12:34.408 00:12:34.408 --- 10.0.0.1 ping statistics --- 00:12:34.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.408 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.408 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=481523 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.408 09:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 481523 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 481523 ']' 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.408 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.408 [2024-07-25 09:27:07.062066] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:34.408 [2024-07-25 09:27:07.062155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.408 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.408 [2024-07-25 09:27:07.136372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.666 [2024-07-25 09:27:07.257455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.666 [2024-07-25 09:27:07.257511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.666 [2024-07-25 09:27:07.257528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.666 [2024-07-25 09:27:07.257542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.666 [2024-07-25 09:27:07.257554] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
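nvmfappstart above runs the target binary inside the namespace (-i 0 selects the shared-memory id, -e 0xFFFF enables all tracepoint groups, -m 0xF gives it four cores) and then blocks in waitforlisten until the app answers on its RPC socket; the four "Reactor started" notices that follow are the direct result of that core mask. A hand-rolled sketch of the same start-up, with the polling loop standing in for the real waitforlisten helper from autotest_common.sh and $SPDK_DIR as a path placeholder:

  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app accepts RPCs
  until $SPDK_DIR/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done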
00:12:34.666 [2024-07-25 09:27:07.257617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.666 [2024-07-25 09:27:07.257671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.666 [2024-07-25 09:27:07.257704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.666 [2024-07-25 09:27:07.257707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.597 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:35.598 "tick_rate": 2700000000, 00:12:35.598 "poll_groups": [ 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_000", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [] 00:12:35.598 }, 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_001", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [] 00:12:35.598 }, 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_002", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [] 00:12:35.598 }, 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_003", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [] 00:12:35.598 } 00:12:35.598 ] 00:12:35.598 }' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
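rpc.sh drives these checks through two small jq wrappers: jcount counts how many values a filter produces (here four poll-group names, one per reactor) and jsum, used a little further down, adds them up. The same checks reproduced by hand against the same RPC (rpc.py path assumed; the test issues it through rpc_cmd instead):

  stats=$($SPDK_DIR/scripts/rpc.py nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[].name' | wc -l                                 # jcount: expect 4
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # jsum: expect 0 before any host connects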
00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 [2024-07-25 09:27:08.110272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:35.598 "tick_rate": 2700000000, 00:12:35.598 "poll_groups": [ 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_000", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [ 00:12:35.598 { 00:12:35.598 "trtype": "TCP" 00:12:35.598 } 00:12:35.598 ] 00:12:35.598 }, 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_001", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [ 00:12:35.598 { 00:12:35.598 "trtype": "TCP" 00:12:35.598 } 00:12:35.598 ] 00:12:35.598 }, 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_002", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [ 00:12:35.598 { 00:12:35.598 "trtype": "TCP" 00:12:35.598 } 00:12:35.598 ] 00:12:35.598 }, 00:12:35.598 { 00:12:35.598 "name": "nvmf_tgt_poll_group_003", 00:12:35.598 "admin_qpairs": 0, 00:12:35.598 "io_qpairs": 0, 00:12:35.598 "current_admin_qpairs": 0, 00:12:35.598 "current_io_qpairs": 0, 00:12:35.598 "pending_bdev_io": 0, 00:12:35.598 "completed_nvme_io": 0, 00:12:35.598 "transports": [ 00:12:35.598 { 00:12:35.598 "trtype": "TCP" 00:12:35.598 } 00:12:35.598 ] 00:12:35.598 } 00:12:35.598 ] 00:12:35.598 }' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:35.598 09:27:08 
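Creating the transport is what populates the per-poll-group transports arrays: before nvmf_create_transport the jq '.poll_groups[0].transports[0]' probe returned null, afterwards every poll group reports a TCP entry while the qpair counters stay at zero until a host connects. The same step and check as a one-off (paths assumed as before; -o and -u 8192 are simply the transport options this run passes, -u being the I/O unit size):

  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].transports[].trtype] | unique'   # expect ["TCP"]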
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 Malloc1 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 [2024-07-25 09:27:08.257693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:35.598 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:12:35.599 [2024-07-25 09:27:08.280173] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:12:35.599 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.599 could not add new controller: failed to write to nvme-fabrics device 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.599 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
--hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.529 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.529 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:36.529 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.529 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:36.529 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:38.425 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:38.426 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:38.426 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.426 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.426 [2024-07-25 09:27:11.140977] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:12:38.684 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.684 could not add new controller: failed to write to nvme-fabrics device 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.684 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.261 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.261 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:39.261 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.261 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:39.261 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
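Note: the waitforserial / waitforserial_disconnect calls traced throughout this run poll lsblk until a block device with the expected serial appears or disappears. A rough sketch reconstructed from the traced autotest_common.sh commands; loop bounds, sleep placement, and the exit paths are assumptions:

    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n $2 ]] && nvme_device_counter=$2
        sleep 2
        while ((i++ <= 15)); do
            # Count block devices whose SERIAL matches, e.g. SPDKISFASTANDAWESOME
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }

    waitforserial_disconnect() {
        local i=0
        # Poll until neither lsblk view still shows a device with this serial
        while lsblk -o NAME,SERIAL | grep -q -w "$1" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
            ((++i > 15)) && return 1
            sleep 1
        done
        return 0
    }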
00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:41.159 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 [2024-07-25 09:27:13.921222] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.418 
09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.418 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.007 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.007 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:42.007 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.007 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:42.007 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:43.906 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
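Note: each of the five iterations traced above and below runs the same create, connect, disconnect, tear-down cycle against the Malloc1 bdev. Roughly, the rpc.sh@81-@94 loop amounts to the following sketch, reconstructed from the traced commands; $loops, $hostnqn, and $hostid stand in for the literal values seen in the trace:

    MALLOC_BDEV_SIZE=64
    MALLOC_BLOCK_SIZE=512
    rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc1

    for i in $(seq 1 $loops); do
        # Bring up a subsystem backed by Malloc1 and open a TCP listener
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        # Attach from the host side, wait for the namespace, then detach again
        nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
            -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME

        # Tear the subsystem back down before the next iteration
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done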
00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.164 [2024-07-25 09:27:16.816676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.164 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.097 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.097 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:12:45.097 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.097 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:45.097 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.994 [2024-07-25 09:27:19.630582] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.994 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.928 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.928 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:47.928 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.928 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:47.928 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.827 09:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.827 [2024-07-25 09:27:22.445299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.827 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.393 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.393 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:50.393 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.393 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:50.393 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 [2024-07-25 09:27:25.202267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.177 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.177 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:53.177 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.177 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:53.177 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:55.704 09:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 [2024-07-25 09:27:27.972870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 [2024-07-25 09:27:28.020942] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.704 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 [2024-07-25 09:27:28.069086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 [2024-07-25 09:27:28.117238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 [2024-07-25 09:27:28.165451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.705 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:55.705 "tick_rate": 2700000000, 00:12:55.705 "poll_groups": [ 00:12:55.705 { 00:12:55.705 "name": "nvmf_tgt_poll_group_000", 00:12:55.705 "admin_qpairs": 2, 00:12:55.705 "io_qpairs": 84, 00:12:55.705 "current_admin_qpairs": 0, 00:12:55.705 "current_io_qpairs": 0, 00:12:55.705 "pending_bdev_io": 0, 00:12:55.705 "completed_nvme_io": 190, 00:12:55.705 "transports": [ 00:12:55.705 { 00:12:55.705 "trtype": "TCP" 00:12:55.705 } 00:12:55.705 ] 00:12:55.705 }, 00:12:55.705 { 00:12:55.705 "name": "nvmf_tgt_poll_group_001", 00:12:55.705 "admin_qpairs": 2, 00:12:55.705 "io_qpairs": 84, 00:12:55.705 "current_admin_qpairs": 0, 00:12:55.705 "current_io_qpairs": 0, 00:12:55.705 "pending_bdev_io": 0, 00:12:55.705 "completed_nvme_io": 175, 00:12:55.705 "transports": [ 00:12:55.705 { 00:12:55.705 "trtype": "TCP" 00:12:55.705 } 00:12:55.705 ] 00:12:55.705 }, 00:12:55.705 { 00:12:55.705 "name": "nvmf_tgt_poll_group_002", 00:12:55.705 "admin_qpairs": 1, 00:12:55.705 "io_qpairs": 84, 00:12:55.705 "current_admin_qpairs": 0, 00:12:55.705 "current_io_qpairs": 0, 00:12:55.705 "pending_bdev_io": 0, 00:12:55.705 "completed_nvme_io": 184, 00:12:55.705 "transports": [ 00:12:55.705 { 00:12:55.705 "trtype": "TCP" 00:12:55.706 } 00:12:55.706 ] 00:12:55.706 }, 00:12:55.706 { 00:12:55.706 "name": "nvmf_tgt_poll_group_003", 00:12:55.706 "admin_qpairs": 2, 00:12:55.706 "io_qpairs": 84, 00:12:55.706 "current_admin_qpairs": 0, 00:12:55.706 "current_io_qpairs": 0, 00:12:55.706 "pending_bdev_io": 0, 00:12:55.706 "completed_nvme_io": 137, 00:12:55.706 "transports": [ 00:12:55.706 { 00:12:55.706 "trtype": "TCP" 00:12:55.706 } 00:12:55.706 ] 00:12:55.706 } 00:12:55.706 ] 00:12:55.706 }' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.706 rmmod nvme_tcp 00:12:55.706 rmmod nvme_fabrics 00:12:55.706 rmmod nvme_keyring 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 481523 ']' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 481523 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 481523 ']' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 481523 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 481523 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 481523' 00:12:55.706 killing process with pid 481523 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 481523 00:12:55.706 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 481523 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
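The two assertions a few entries above, (( 7 > 0 )) and (( 336 > 0 )), come from the jsum helper traced at target/rpc.sh@19-20: it applies a jq filter to the JSON captured from nvmf_get_stats and sums the values with awk. A minimal reconstruction of that idea (the real helper lives in target/rpc.sh; how it pipes $stats into jq is not shown in the trace):

    jsum() {
            local filter=$1
            # $stats holds the nvmf_get_stats JSON captured earlier
            jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # admin qpairs across the four poll groups: 2 + 2 + 1 + 2 = 7
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    # io qpairs: 84 * 4 = 336
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))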
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.271 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.175 00:12:58.175 real 0m26.028s 00:12:58.175 user 1m24.971s 00:12:58.175 sys 0m4.203s 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.175 ************************************ 00:12:58.175 END TEST nvmf_rpc 00:12:58.175 ************************************ 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.175 ************************************ 00:12:58.175 START TEST nvmf_invalid 00:12:58.175 ************************************ 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:58.175 * Looking for test storage... 00:12:58.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.175 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:58.176 09:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:58.176 09:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.176 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:00.073 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:00.073 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.073 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:00.074 Found net devices under 0000:82:00.0: cvl_0_0 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.074 09:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:00.074 Found net devices under 0000:82:00.1: cvl_0_1 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.074 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:13:00.332 00:13:00.332 --- 10.0.0.2 ping statistics --- 00:13:00.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.332 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:00.332 00:13:00.332 --- 10.0.0.1 ping statistics --- 00:13:00.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.332 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=486634 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 486634 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 486634 ']' 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.332 09:27:32 
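Condensed from the nvmf_tcp_init trace above, this is the loopback topology the two pings just verified: the target-side port cvl_0_0 is moved into its own network namespace and addressed 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT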
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.332 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.332 [2024-07-25 09:27:33.000624] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:13:00.332 [2024-07-25 09:27:33.000716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.332 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.590 [2024-07-25 09:27:33.069176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.590 [2024-07-25 09:27:33.193077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.590 [2024-07-25 09:27:33.193136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.590 [2024-07-25 09:27:33.193153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.590 [2024-07-25 09:27:33.193167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.590 [2024-07-25 09:27:33.193178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
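With the namespaces wired up, nvmfappstart (nvmf/common.sh@480-482 above) launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the app's JSON-RPC socket answers. A simplified stand-in for that sequence, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten in autotest_common.sh retries with a timeout, per the max_retries=100 entry above):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 0.5
    done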
00:13:00.590 [2024-07-25 09:27:33.193237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.590 [2024-07-25 09:27:33.196381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.590 [2024-07-25 09:27:33.196428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.590 [2024-07-25 09:27:33.196433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.524 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.524 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:01.524 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.524 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.524 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:01.524 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.524 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:01.524 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5229 00:13:01.781 [2024-07-25 09:27:34.291964] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:01.781 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:01.781 { 00:13:01.781 "nqn": "nqn.2016-06.io.spdk:cnode5229", 00:13:01.781 "tgt_name": "foobar", 00:13:01.782 "method": "nvmf_create_subsystem", 00:13:01.782 "req_id": 1 00:13:01.782 } 00:13:01.782 Got JSON-RPC error response 00:13:01.782 response: 00:13:01.782 { 00:13:01.782 "code": -32603, 00:13:01.782 "message": "Unable to find target foobar" 00:13:01.782 }' 00:13:01.782 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:01.782 { 00:13:01.782 "nqn": "nqn.2016-06.io.spdk:cnode5229", 00:13:01.782 "tgt_name": "foobar", 00:13:01.782 "method": "nvmf_create_subsystem", 00:13:01.782 "req_id": 1 00:13:01.782 } 00:13:01.782 Got JSON-RPC error response 00:13:01.782 response: 00:13:01.782 { 00:13:01.782 "code": -32603, 00:13:01.782 "message": "Unable to find target foobar" 00:13:01.782 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:01.782 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:01.782 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6429 00:13:02.039 [2024-07-25 09:27:34.564842] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6429: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:02.039 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:02.039 { 00:13:02.040 "nqn": "nqn.2016-06.io.spdk:cnode6429", 00:13:02.040 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:02.040 "method": "nvmf_create_subsystem", 00:13:02.040 "req_id": 1 00:13:02.040 } 00:13:02.040 Got JSON-RPC error response 
00:13:02.040 response: 00:13:02.040 { 00:13:02.040 "code": -32602, 00:13:02.040 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:02.040 }' 00:13:02.040 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:02.040 { 00:13:02.040 "nqn": "nqn.2016-06.io.spdk:cnode6429", 00:13:02.040 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:02.040 "method": "nvmf_create_subsystem", 00:13:02.040 "req_id": 1 00:13:02.040 } 00:13:02.040 Got JSON-RPC error response 00:13:02.040 response: 00:13:02.040 { 00:13:02.040 "code": -32602, 00:13:02.040 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:02.040 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:02.040 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:02.040 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29509 00:13:02.298 [2024-07-25 09:27:34.809626] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29509: invalid model number 'SPDK_Controller' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:02.298 { 00:13:02.298 "nqn": "nqn.2016-06.io.spdk:cnode29509", 00:13:02.298 "model_number": "SPDK_Controller\u001f", 00:13:02.298 "method": "nvmf_create_subsystem", 00:13:02.298 "req_id": 1 00:13:02.298 } 00:13:02.298 Got JSON-RPC error response 00:13:02.298 response: 00:13:02.298 { 00:13:02.298 "code": -32602, 00:13:02.298 "message": "Invalid MN SPDK_Controller\u001f" 00:13:02.298 }' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:02.298 { 00:13:02.298 "nqn": "nqn.2016-06.io.spdk:cnode29509", 00:13:02.298 "model_number": "SPDK_Controller\u001f", 00:13:02.298 "method": "nvmf_create_subsystem", 00:13:02.298 "req_id": 1 00:13:02.298 } 00:13:02.298 Got JSON-RPC error response 00:13:02.298 response: 00:13:02.298 { 00:13:02.298 "code": -32602, 00:13:02.298 "message": "Invalid MN SPDK_Controller\u001f" 00:13:02.298 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
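The three negative tests above (unknown target name, then a serial number and a model number each carrying a stray 0x1f byte) all follow the same pattern from target/invalid.sh: issue a deliberately bad nvmf_create_subsystem, capture the JSON-RPC error, and assert on its message. Paraphrased from the trace rather than quoted from the script ($rpc is the rpc.py path set at invalid.sh@12):

    # unknown target name -> "Unable to find target foobar"
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5229 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # serial number with a non-printable byte -> "Invalid SN ..."
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6429 2>&1) || true
    [[ $out == *"Invalid SN"* ]]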
# printf %x 39 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:02.298 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=F 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''X-]=)p%#},4&N'\''I&u$fF' 00:13:02.299 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''X-]=)p%#},4&N'\''I&u$fF' nqn.2016-06.io.spdk:cnode31856 00:13:02.558 [2024-07-25 09:27:35.130705] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31856: invalid serial number ''X-]=)p%#},4&N'I&u$fF' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:02.558 { 00:13:02.558 "nqn": "nqn.2016-06.io.spdk:cnode31856", 00:13:02.558 "serial_number": "'\''X-]=)p%#},4&N'\''I&u$fF", 00:13:02.558 "method": "nvmf_create_subsystem", 00:13:02.558 "req_id": 1 00:13:02.558 } 00:13:02.558 Got JSON-RPC error response 00:13:02.558 response: 00:13:02.558 { 00:13:02.558 "code": -32602, 00:13:02.558 "message": "Invalid SN '\''X-]=)p%#},4&N'\''I&u$fF" 00:13:02.558 }' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:02.558 { 00:13:02.558 "nqn": "nqn.2016-06.io.spdk:cnode31856", 00:13:02.558 "serial_number": "'X-]=)p%#},4&N'I&u$fF", 00:13:02.558 "method": "nvmf_create_subsystem", 00:13:02.558 "req_id": 1 00:13:02.558 } 00:13:02.558 Got JSON-RPC error response 00:13:02.558 response: 00:13:02.558 { 00:13:02.558 "code": -32602, 00:13:02.558 "message": "Invalid SN 'X-]=)p%#},4&N'I&u$fF" 00:13:02.558 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:02.558 09:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:02.558 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:02.559 09:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:02.559 09:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 
09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:02.559 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=';' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.560 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # 
echo '>MOZ#U,a58 s= 9.}!W6wQlhkyf<1M3RO;Q:&o0U' 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>MOZ#U,a58 s= 9.}!W6wQlhkyf<1M3RO;Q:&o0U' nqn.2016-06.io.spdk:cnode6116 00:13:02.817 [2024-07-25 09:27:35.519988] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6116: invalid model number '>MOZ#U,a58 s= 9.}!W6wQlhkyf<1M3RO;Q:&o0U' 00:13:02.817 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:02.817 { 00:13:02.817 "nqn": "nqn.2016-06.io.spdk:cnode6116", 00:13:02.817 "model_number": ">MOZ#U,a58 s= 9.}!W6wQlhky\u007ff<1M3RO;Q:&o0U", 00:13:02.817 "method": "nvmf_create_subsystem", 00:13:02.817 "req_id": 1 00:13:02.817 } 00:13:02.818 Got JSON-RPC error response 00:13:02.818 response: 00:13:02.818 { 00:13:02.818 "code": -32602, 00:13:02.818 "message": "Invalid MN >MOZ#U,a58 s= 9.}!W6wQlhky\u007ff<1M3RO;Q:&o0U" 00:13:02.818 }' 00:13:02.818 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:02.818 { 00:13:02.818 "nqn": "nqn.2016-06.io.spdk:cnode6116", 00:13:02.818 "model_number": ">MOZ#U,a58 s= 9.}!W6wQlhky\u007ff<1M3RO;Q:&o0U", 00:13:02.818 "method": "nvmf_create_subsystem", 00:13:02.818 "req_id": 1 00:13:02.818 } 00:13:02.818 Got JSON-RPC error response 00:13:02.818 response: 00:13:02.818 { 00:13:02.818 "code": -32602, 00:13:02.818 "message": "Invalid MN >MOZ#U,a58 s= 9.}!W6wQlhky\u007ff<1M3RO;Q:&o0U" 00:13:02.818 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.818 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:03.075 [2024-07-25 09:27:35.764887] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.075 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:03.332 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:03.332 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:03.332 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:03.332 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:03.332 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:03.590 [2024-07-25 09:27:36.278602] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:03.590 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:03.590 { 00:13:03.590 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.590 "listen_address": { 00:13:03.590 "trtype": "tcp", 00:13:03.590 "traddr": "", 00:13:03.590 "trsvcid": "4421" 00:13:03.590 }, 00:13:03.590 "method": "nvmf_subsystem_remove_listener", 00:13:03.590 "req_id": 1 00:13:03.590 } 00:13:03.590 Got JSON-RPC error response 00:13:03.590 response: 00:13:03.590 { 00:13:03.590 "code": -32602, 00:13:03.590 "message": "Invalid parameters" 00:13:03.590 }' 00:13:03.590 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
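The per-character trace above is invalid.sh assembling a random model number one byte at a time (printf %x, echo -e, string+=) and then handing it to nvmf_create_subsystem, which has to answer with an "Invalid MN" JSON-RPC error. A minimal standalone sketch of the same negative check, assuming only the rpc.py path already shown in the log; the model-number string below is illustrative rather than the generated one, and it is made invalid by embedding a 0x7f (DEL) byte like the \u007f visible in the response above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Build a model number containing a non-printable 0x7f byte so the target rejects it.
  bad_mn="illustrative-model-$(printf '\x7f')-number"
  out=$("$rpc" nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode6116 2>&1 || true)
  [[ $out == *"Invalid MN"* ]] && echo 'invalid model number rejected as expected'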
target/invalid.sh@70 -- # [[ request: 00:13:03.590 { 00:13:03.590 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.590 "listen_address": { 00:13:03.590 "trtype": "tcp", 00:13:03.590 "traddr": "", 00:13:03.590 "trsvcid": "4421" 00:13:03.590 }, 00:13:03.590 "method": "nvmf_subsystem_remove_listener", 00:13:03.590 "req_id": 1 00:13:03.590 } 00:13:03.590 Got JSON-RPC error response 00:13:03.590 response: 00:13:03.590 { 00:13:03.590 "code": -32602, 00:13:03.590 "message": "Invalid parameters" 00:13:03.590 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:03.590 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15261 -i 0 00:13:03.847 [2024-07-25 09:27:36.527380] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15261: invalid cntlid range [0-65519] 00:13:03.847 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:03.847 { 00:13:03.847 "nqn": "nqn.2016-06.io.spdk:cnode15261", 00:13:03.847 "min_cntlid": 0, 00:13:03.847 "method": "nvmf_create_subsystem", 00:13:03.847 "req_id": 1 00:13:03.847 } 00:13:03.847 Got JSON-RPC error response 00:13:03.847 response: 00:13:03.847 { 00:13:03.847 "code": -32602, 00:13:03.847 "message": "Invalid cntlid range [0-65519]" 00:13:03.847 }' 00:13:03.847 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:03.847 { 00:13:03.847 "nqn": "nqn.2016-06.io.spdk:cnode15261", 00:13:03.847 "min_cntlid": 0, 00:13:03.847 "method": "nvmf_create_subsystem", 00:13:03.847 "req_id": 1 00:13:03.847 } 00:13:03.847 Got JSON-RPC error response 00:13:03.847 response: 00:13:03.847 { 00:13:03.847 "code": -32602, 00:13:03.847 "message": "Invalid cntlid range [0-65519]" 00:13:03.847 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.847 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14827 -i 65520 00:13:04.105 [2024-07-25 09:27:36.776208] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14827: invalid cntlid range [65520-65519] 00:13:04.105 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:04.105 { 00:13:04.105 "nqn": "nqn.2016-06.io.spdk:cnode14827", 00:13:04.105 "min_cntlid": 65520, 00:13:04.105 "method": "nvmf_create_subsystem", 00:13:04.105 "req_id": 1 00:13:04.105 } 00:13:04.105 Got JSON-RPC error response 00:13:04.105 response: 00:13:04.105 { 00:13:04.105 "code": -32602, 00:13:04.105 "message": "Invalid cntlid range [65520-65519]" 00:13:04.105 }' 00:13:04.105 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:04.105 { 00:13:04.105 "nqn": "nqn.2016-06.io.spdk:cnode14827", 00:13:04.105 "min_cntlid": 65520, 00:13:04.105 "method": "nvmf_create_subsystem", 00:13:04.105 "req_id": 1 00:13:04.105 } 00:13:04.105 Got JSON-RPC error response 00:13:04.105 response: 00:13:04.105 { 00:13:04.105 "code": -32602, 00:13:04.105 "message": "Invalid cntlid range [65520-65519]" 00:13:04.105 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.105 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19435 -I 0 
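The @69/@70 exchange above covers listener removal: asking the target to remove a listener that was never added (empty traddr, port 4421) must fail with a plain -32602 parameter error and not with an "Unable to stop listener." message. The same check in isolation, reusing the rpc.py path and arguments from the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$("$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 2>&1 || true)
  # The assertion is only that the failure is not an "Unable to stop listener." error.
  [[ $out != *"Unable to stop listener."* ]] && echo 'removal failed with a parameter error, as expected'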
00:13:04.362 [2024-07-25 09:27:37.012982] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19435: invalid cntlid range [1-0] 00:13:04.362 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:04.362 { 00:13:04.362 "nqn": "nqn.2016-06.io.spdk:cnode19435", 00:13:04.362 "max_cntlid": 0, 00:13:04.362 "method": "nvmf_create_subsystem", 00:13:04.362 "req_id": 1 00:13:04.362 } 00:13:04.362 Got JSON-RPC error response 00:13:04.362 response: 00:13:04.362 { 00:13:04.362 "code": -32602, 00:13:04.362 "message": "Invalid cntlid range [1-0]" 00:13:04.362 }' 00:13:04.362 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:04.362 { 00:13:04.362 "nqn": "nqn.2016-06.io.spdk:cnode19435", 00:13:04.362 "max_cntlid": 0, 00:13:04.362 "method": "nvmf_create_subsystem", 00:13:04.362 "req_id": 1 00:13:04.362 } 00:13:04.362 Got JSON-RPC error response 00:13:04.362 response: 00:13:04.362 { 00:13:04.362 "code": -32602, 00:13:04.362 "message": "Invalid cntlid range [1-0]" 00:13:04.362 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.363 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16037 -I 65520 00:13:04.620 [2024-07-25 09:27:37.277889] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16037: invalid cntlid range [1-65520] 00:13:04.620 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:04.620 { 00:13:04.620 "nqn": "nqn.2016-06.io.spdk:cnode16037", 00:13:04.620 "max_cntlid": 65520, 00:13:04.620 "method": "nvmf_create_subsystem", 00:13:04.620 "req_id": 1 00:13:04.620 } 00:13:04.620 Got JSON-RPC error response 00:13:04.620 response: 00:13:04.620 { 00:13:04.620 "code": -32602, 00:13:04.620 "message": "Invalid cntlid range [1-65520]" 00:13:04.620 }' 00:13:04.620 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:04.620 { 00:13:04.620 "nqn": "nqn.2016-06.io.spdk:cnode16037", 00:13:04.620 "max_cntlid": 65520, 00:13:04.620 "method": "nvmf_create_subsystem", 00:13:04.620 "req_id": 1 00:13:04.620 } 00:13:04.620 Got JSON-RPC error response 00:13:04.620 response: 00:13:04.620 { 00:13:04.620 "code": -32602, 00:13:04.620 "message": "Invalid cntlid range [1-65520]" 00:13:04.620 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.620 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32560 -i 6 -I 5 00:13:04.877 [2024-07-25 09:27:37.522728] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32560: invalid cntlid range [6-5] 00:13:04.877 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:04.877 { 00:13:04.877 "nqn": "nqn.2016-06.io.spdk:cnode32560", 00:13:04.877 "min_cntlid": 6, 00:13:04.877 "max_cntlid": 5, 00:13:04.877 "method": "nvmf_create_subsystem", 00:13:04.877 "req_id": 1 00:13:04.877 } 00:13:04.877 Got JSON-RPC error response 00:13:04.877 response: 00:13:04.877 { 00:13:04.877 "code": -32602, 00:13:04.877 "message": "Invalid cntlid range [6-5]" 00:13:04.877 }' 00:13:04.877 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:04.877 { 00:13:04.877 "nqn": 
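Steps @73 through @84 of invalid.sh above sweep min_cntlid/max_cntlid through out-of-range and inverted values (0, 65520, and min greater than max), and each nvmf_create_subsystem call has to come back with an "Invalid cntlid range" error (the valid range is 1 to 65519). A table-driven sketch of the same sweep; the subsystem names are arbitrary here, and $args is left unquoted on purpose so it splits into separate flags:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  cases=(
    "-i 0"        # min_cntlid below the valid range
    "-i 65520"    # min_cntlid above the valid range
    "-I 0"        # max_cntlid below the valid range
    "-I 65520"    # max_cntlid above the valid range
    "-i 6 -I 5"   # min_cntlid greater than max_cntlid
  )
  for args in "${cases[@]}"; do
    out=$("$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1 || true)
    [[ $out == *"Invalid cntlid range"* ]] || echo "unexpected response for '$args': $out"
  done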
"nqn.2016-06.io.spdk:cnode32560", 00:13:04.877 "min_cntlid": 6, 00:13:04.877 "max_cntlid": 5, 00:13:04.877 "method": "nvmf_create_subsystem", 00:13:04.877 "req_id": 1 00:13:04.877 } 00:13:04.877 Got JSON-RPC error response 00:13:04.877 response: 00:13:04.877 { 00:13:04.877 "code": -32602, 00:13:04.877 "message": "Invalid cntlid range [6-5]" 00:13:04.877 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.877 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:05.135 { 00:13:05.135 "name": "foobar", 00:13:05.135 "method": "nvmf_delete_target", 00:13:05.135 "req_id": 1 00:13:05.135 } 00:13:05.135 Got JSON-RPC error response 00:13:05.135 response: 00:13:05.135 { 00:13:05.135 "code": -32602, 00:13:05.135 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:05.135 }' 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:05.135 { 00:13:05.135 "name": "foobar", 00:13:05.135 "method": "nvmf_delete_target", 00:13:05.135 "req_id": 1 00:13:05.135 } 00:13:05.135 Got JSON-RPC error response 00:13:05.135 response: 00:13:05.135 { 00:13:05.135 "code": -32602, 00:13:05.135 "message": "The specified target doesn't exist, cannot delete it." 00:13:05.135 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.135 rmmod nvme_tcp 00:13:05.135 rmmod nvme_fabrics 00:13:05.135 rmmod nvme_keyring 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 486634 ']' 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 486634 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 486634 ']' 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 486634 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.135 09:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 486634 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 486634' 00:13:05.135 killing process with pid 486634 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 486634 00:13:05.135 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 486634 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.418 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.435 00:13:07.435 real 0m9.250s 00:13:07.435 user 0m22.955s 00:13:07.435 sys 0m2.374s 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.435 ************************************ 00:13:07.435 END TEST nvmf_invalid 00:13:07.435 ************************************ 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.435 ************************************ 00:13:07.435 START TEST nvmf_connect_stress 00:13:07.435 ************************************ 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:07.435 * Looking for test storage... 
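The real/user/sys timing and the START TEST / END TEST banners around nvmf_invalid and nvmf_connect_stress come from run_test in autotest_common.sh, which times the named test script and propagates its exit status. A simplified sketch of that wrapper pattern; the real helper also manages xtrace state and nested suite names, so this is only the shape of it:

  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }
  # e.g. run_test_sketch nvmf_connect_stress test/nvmf/target/connect_stress.sh --transport=tcp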
00:13:07.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:07.435 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.436 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.695 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.695 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.695 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.695 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.595 09:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.595 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:09.596 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:09.596 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:09.596 Found net devices under 0000:82:00.0: cvl_0_0 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:09.596 Found net devices under 0000:82:00.1: cvl_0_1 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
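The pci_net_devs globbing above is how gather_supported_nvmf_pci_devs turns whitelisted PCI functions into interface names: for each matching device it lists the net/ directory that the kernel driver populates under sysfs, which is what maps 0000:82:00.0 and 0000:82:00.1 to cvl_0_0 and cvl_0_1 here. The same lookup in isolation, with the device list hard-coded from the "Found ..." lines above:

  pci_devs=(0000:82:00.0 0000:82:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $dev ]] || continue        # no bound net driver, skip this function
      echo "Found net devices under $pci: ${dev##*/}"
      net_devs+=("${dev##*/}")
    done
  done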
yes ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.596 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:13:09.597 00:13:09.597 --- 10.0.0.2 ping statistics --- 00:13:09.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.597 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
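nvmf_tcp_init above splits the two ports between the host and a fresh network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2/24, cvl_0_1 stays on the host as the initiator side at 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction (continued below) proves the path. A condensed sketch of that plumbing, reusing the interface and namespace names from the log and assuming it runs as root on otherwise unconfigured ports:

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0     # moves into the namespace, carries the target address
  INI_IF=cvl_0_1     # stays in the host namespace, carries the initiator address
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> host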
00:13:09.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:09.597 00:13:09.597 --- 10.0.0.1 ping statistics --- 00:13:09.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.597 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=489281 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 489281 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 489281 ']' 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.597 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.597 [2024-07-25 09:27:42.301448] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
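nvmfappstart then launches the target application inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and waits for it to listen on /var/tmp/spdk.sock before any rpc_cmd is issued. The polling loop below is an illustrative stand-in for that wait, not the waitforlisten helper itself; it assumes rpc.py and the default RPC socket path shown in the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the app answers, giving up after ~10 seconds.
  for (( i = 0; i < 100; i++ )); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
      echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
      break
    fi
    sleep 0.1
  done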
00:13:09.597 [2024-07-25 09:27:42.301542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.855 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.855 [2024-07-25 09:27:42.370111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.855 [2024-07-25 09:27:42.480332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.855 [2024-07-25 09:27:42.480404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.855 [2024-07-25 09:27:42.480435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.855 [2024-07-25 09:27:42.480446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.855 [2024-07-25 09:27:42.480457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.855 [2024-07-25 09:27:42.480513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.855 [2024-07-25 09:27:42.480575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.855 [2024-07-25 09:27:42.480579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.112 [2024-07-25 09:27:42.621994] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.112 [2024-07-25 09:27:42.652454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.112 NULL1 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=489313 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.112 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.113 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.113 09:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.370 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.370 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:10.370 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.370 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.370 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.627 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.627 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:10.627 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.627 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.627 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.189 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.189 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:11.189 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.189 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.189 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.445 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.445 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:11.445 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.445 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.445 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.702 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.702 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:11.702 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.702 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.702 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.960 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.960 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:11.960 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.960 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.960 09:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.524 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.524 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:12.524 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.524 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.524 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.782 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.782 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:12.782 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.782 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.782 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.039 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.039 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:13.039 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.039 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.039 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.296 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.296 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:13.296 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.296 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.296 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.554 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.554 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:13.554 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.554 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.554 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.119 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.119 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:14.119 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.119 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.119 09:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.376 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.376 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:14.376 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.376 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.376 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.634 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.634 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:14.634 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.634 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.634 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.891 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.891 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:14.891 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.891 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.891 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.148 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.149 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:15.149 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.149 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.149 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.713 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.713 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:15.713 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.714 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.714 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.971 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.971 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:15.971 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.971 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.971 09:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.229 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.229 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:16.229 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.229 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.229 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.487 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.487 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:16.487 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.487 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.487 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.745 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.745 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:16.745 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.745 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.745 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.310 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.310 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:17.310 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.310 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.310 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.568 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:17.568 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.568 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.568 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.826 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.826 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:17.826 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.826 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.826 09:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.083 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.083 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:18.083 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.083 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.083 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.341 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.341 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:18.341 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.341 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.341 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.905 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.905 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:18.905 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.905 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.905 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.163 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.163 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:19.163 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.163 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.163 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.421 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.421 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:19.421 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.421 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.421 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.678 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.678 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:19.678 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.678 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.678 09:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.244 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.244 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:20.244 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.244 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.244 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.244 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489313 00:13:20.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (489313) - No such process 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 489313 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.502 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.502 rmmod nvme_tcp 00:13:20.502 rmmod nvme_fabrics 00:13:20.502 rmmod nvme_keyring 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 489281 ']' 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 489281 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 489281 ']' 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 489281 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.502 09:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 489281 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 489281' 00:13:20.502 killing process with pid 489281 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 489281 00:13:20.502 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 489281 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.761 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.666 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.666 00:13:22.666 real 0m15.280s 00:13:22.666 user 0m39.694s 00:13:22.666 sys 0m4.657s 00:13:22.666 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.666 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.666 ************************************ 00:13:22.666 END TEST nvmf_connect_stress 00:13:22.666 ************************************ 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.925 ************************************ 00:13:22.925 START TEST nvmf_fused_ordering 00:13:22.925 ************************************ 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:22.925 * Looking for test storage... 
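The long run of kill -0 489313 / rpc_cmd entries that fills the connect_stress test above is its monitoring loop: the stressor binary is started in the background with -t 10 against the listener, a batch of 20 RPCs is queued into rpc.txt (the seq 1 20 / cat entries at script lines 27-28), and the harness keeps issuing RPCs for as long as the stressor stays alive. The heredoc bodies and redirections are not echoed by xtrace, so the sketch below marks them as assumptions; PID and paths are from this run.

  # Shape of the connect_stress monitoring loop seen above
  PERF_PID=489313                    # connect_stress -c 0x1 -r 'trtype:tcp ... subnqn:nqn.2016-06.io.spdk:cnode1' -t 10, backgrounded
  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt

  for i in $(seq 1 20); do
      :   # line 28's `cat` appends one RPC to $rpcs; the heredoc text is not visible in the xtrace output
  done

  while kill -0 "$PERF_PID"; do      # line 34: stressor still running? (fails with "No such process" when it exits)
      rpc_cmd                        # line 35: presumably replays $rpcs against the target; redirection not shown by xtrace
  done
  wait "$PERF_PID"                   # line 38
  rm -f "$rpcs"                      # line 39
  trap - SIGINT SIGTERM EXIT         # line 41, then nvmftestfini tears the rig down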
00:13:22.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.925 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.926 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.830 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.831 09:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:24.831 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:24.831 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
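The gather_supported_nvmf_pci_devs walk running here selects NICs purely by PCI vendor/device ID (the e810/x722/mlx arrays above) and then resolves each matching function to its kernel interface through sysfs. A rough stand-in for that lookup, restricted to the E810 IDs this job matches on (0x1592/0x159b) and not the harness code itself, would be:

  # Rough equivalent of the E810 discovery above (IDs from the log; not the harness implementation)
  net_devs=()
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}') \
             $(lspci -Dn -d 8086:1592 | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue
          net_devs+=("$(basename "$netdir")")   # -> cvl_0_0 / cvl_0_1 in this run
      done
  done
  printf 'Found net devices: %s\n' "${net_devs[*]}"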
00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:24.831 Found net devices under 0000:82:00.0: cvl_0_0 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:24.831 Found net devices under 0000:82:00.1: cvl_0_1 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.831 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:13:25.090 00:13:25.090 --- 10.0.0.2 ping statistics --- 00:13:25.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.090 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:13:25.090 00:13:25.090 --- 10.0.0.1 ping statistics --- 00:13:25.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.090 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=492561 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 492561 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 492561 ']' 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.090 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.090 [2024-07-25 09:27:57.738182] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
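nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch of that start-up, with paths and core mask taken from this run and a plain rpc_get_methods probe standing in for the harness's waitforlisten helper (an assumption, not the helper's actual code):

  # Minimal sketch of nvmfappstart as used above
  NS_CMD="ip netns exec cvl_0_0_ns_spdk"
  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

  $NS_CMD "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &   # -i 0: shm id, -e 0xFFFF: all tracepoint groups, -m 0x2: one reactor on core 1
  nvmfpid=$!

  # wait until the app listens on /var/tmp/spdk.sock (simplified probe in place of waitforlisten)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done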
00:13:25.090 [2024-07-25 09:27:57.738271] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.090 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.090 [2024-07-25 09:27:57.808639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.348 [2024-07-25 09:27:57.928051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.348 [2024-07-25 09:27:57.928122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.348 [2024-07-25 09:27:57.928139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.348 [2024-07-25 09:27:57.928153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.348 [2024-07-25 09:27:57.928165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.348 [2024-07-25 09:27:57.928204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 [2024-07-25 09:27:58.729741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.282 [2024-07-25 09:27:58.745894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 NULL1 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.282 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.282 [2024-07-25 09:27:58.789966] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
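The fused_ordering initiator tool has just been launched against the subsystem. Before its counter output starts, note that the whole target-side configuration above was done over JSON-RPC: rpc_cmd in the trace is essentially the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so a rough manual equivalent of the traced sequence, with the same arguments, would be:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                            # TCP transport, options exactly as traced
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # allow any host, fixed serial, up to 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listen on the namespaced address
scripts/rpc.py bdev_null_create NULL1 1000 512                                                    # 1000 MB null bdev, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                             # becomes namespace 1 (the 1GB namespace reported below)
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbered fused_ordering(N) lines that follow appear to be per-iteration progress output from that tool: the counter runs from 0 to 1023, i.e. 1024 iterations, presumably one per fused compare-and-write pair the test drives through the TCP transport.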
00:13:26.282 [2024-07-25 09:27:58.790002] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492676 ] 00:13:26.282 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.540 Attached to nqn.2016-06.io.spdk:cnode1 00:13:26.540 Namespace ID: 1 size: 1GB 00:13:26.540 fused_ordering(0) 00:13:26.540 fused_ordering(1) 00:13:26.540 fused_ordering(2) 00:13:26.540 fused_ordering(3) 00:13:26.540 fused_ordering(4) 00:13:26.540 fused_ordering(5) 00:13:26.540 fused_ordering(6) 00:13:26.540 fused_ordering(7) 00:13:26.540 fused_ordering(8) 00:13:26.540 fused_ordering(9) 00:13:26.540 fused_ordering(10) 00:13:26.540 fused_ordering(11) 00:13:26.540 fused_ordering(12) 00:13:26.540 fused_ordering(13) 00:13:26.540 fused_ordering(14) 00:13:26.540 fused_ordering(15) 00:13:26.540 fused_ordering(16) 00:13:26.540 fused_ordering(17) 00:13:26.540 fused_ordering(18) 00:13:26.540 fused_ordering(19) 00:13:26.540 fused_ordering(20) 00:13:26.540 fused_ordering(21) 00:13:26.540 fused_ordering(22) 00:13:26.540 fused_ordering(23) 00:13:26.540 fused_ordering(24) 00:13:26.540 fused_ordering(25) 00:13:26.540 fused_ordering(26) 00:13:26.540 fused_ordering(27) 00:13:26.540 fused_ordering(28) 00:13:26.540 fused_ordering(29) 00:13:26.540 fused_ordering(30) 00:13:26.540 fused_ordering(31) 00:13:26.540 fused_ordering(32) 00:13:26.540 fused_ordering(33) 00:13:26.540 fused_ordering(34) 00:13:26.540 fused_ordering(35) 00:13:26.540 fused_ordering(36) 00:13:26.540 fused_ordering(37) 00:13:26.540 fused_ordering(38) 00:13:26.540 fused_ordering(39) 00:13:26.540 fused_ordering(40) 00:13:26.540 fused_ordering(41) 00:13:26.540 fused_ordering(42) 00:13:26.540 fused_ordering(43) 00:13:26.540 fused_ordering(44) 00:13:26.540 fused_ordering(45) 00:13:26.540 fused_ordering(46) 00:13:26.540 fused_ordering(47) 00:13:26.540 fused_ordering(48) 00:13:26.540 fused_ordering(49) 00:13:26.540 fused_ordering(50) 00:13:26.540 fused_ordering(51) 00:13:26.540 fused_ordering(52) 00:13:26.540 fused_ordering(53) 00:13:26.540 fused_ordering(54) 00:13:26.540 fused_ordering(55) 00:13:26.540 fused_ordering(56) 00:13:26.540 fused_ordering(57) 00:13:26.540 fused_ordering(58) 00:13:26.540 fused_ordering(59) 00:13:26.540 fused_ordering(60) 00:13:26.540 fused_ordering(61) 00:13:26.540 fused_ordering(62) 00:13:26.540 fused_ordering(63) 00:13:26.540 fused_ordering(64) 00:13:26.541 fused_ordering(65) 00:13:26.541 fused_ordering(66) 00:13:26.541 fused_ordering(67) 00:13:26.541 fused_ordering(68) 00:13:26.541 fused_ordering(69) 00:13:26.541 fused_ordering(70) 00:13:26.541 fused_ordering(71) 00:13:26.541 fused_ordering(72) 00:13:26.541 fused_ordering(73) 00:13:26.541 fused_ordering(74) 00:13:26.541 fused_ordering(75) 00:13:26.541 fused_ordering(76) 00:13:26.541 fused_ordering(77) 00:13:26.541 fused_ordering(78) 00:13:26.541 fused_ordering(79) 00:13:26.541 fused_ordering(80) 00:13:26.541 fused_ordering(81) 00:13:26.541 fused_ordering(82) 00:13:26.541 fused_ordering(83) 00:13:26.541 fused_ordering(84) 00:13:26.541 fused_ordering(85) 00:13:26.541 fused_ordering(86) 00:13:26.541 fused_ordering(87) 00:13:26.541 fused_ordering(88) 00:13:26.541 fused_ordering(89) 00:13:26.541 fused_ordering(90) 00:13:26.541 fused_ordering(91) 00:13:26.541 fused_ordering(92) 00:13:26.541 fused_ordering(93) 00:13:26.541 fused_ordering(94) 00:13:26.541 fused_ordering(95) 00:13:26.541 fused_ordering(96) 
00:13:26.541 fused_ordering(97) 00:13:26.541 fused_ordering(98) 00:13:26.541 fused_ordering(99) 00:13:26.541 fused_ordering(100) 00:13:26.541 fused_ordering(101) 00:13:26.541 fused_ordering(102) 00:13:26.541 fused_ordering(103) 00:13:26.541 fused_ordering(104) 00:13:26.541 fused_ordering(105) 00:13:26.541 fused_ordering(106) 00:13:26.541 fused_ordering(107) 00:13:26.541 fused_ordering(108) 00:13:26.541 fused_ordering(109) 00:13:26.541 fused_ordering(110) 00:13:26.541 fused_ordering(111) 00:13:26.541 fused_ordering(112) 00:13:26.541 fused_ordering(113) 00:13:26.541 fused_ordering(114) 00:13:26.541 fused_ordering(115) 00:13:26.541 fused_ordering(116) 00:13:26.541 fused_ordering(117) 00:13:26.541 fused_ordering(118) 00:13:26.541 fused_ordering(119) 00:13:26.541 fused_ordering(120) 00:13:26.541 fused_ordering(121) 00:13:26.541 fused_ordering(122) 00:13:26.541 fused_ordering(123) 00:13:26.541 fused_ordering(124) 00:13:26.541 fused_ordering(125) 00:13:26.541 fused_ordering(126) 00:13:26.541 fused_ordering(127) 00:13:26.541 fused_ordering(128) 00:13:26.541 fused_ordering(129) 00:13:26.541 fused_ordering(130) 00:13:26.541 fused_ordering(131) 00:13:26.541 fused_ordering(132) 00:13:26.541 fused_ordering(133) 00:13:26.541 fused_ordering(134) 00:13:26.541 fused_ordering(135) 00:13:26.541 fused_ordering(136) 00:13:26.541 fused_ordering(137) 00:13:26.541 fused_ordering(138) 00:13:26.541 fused_ordering(139) 00:13:26.541 fused_ordering(140) 00:13:26.541 fused_ordering(141) 00:13:26.541 fused_ordering(142) 00:13:26.541 fused_ordering(143) 00:13:26.541 fused_ordering(144) 00:13:26.541 fused_ordering(145) 00:13:26.541 fused_ordering(146) 00:13:26.541 fused_ordering(147) 00:13:26.541 fused_ordering(148) 00:13:26.541 fused_ordering(149) 00:13:26.541 fused_ordering(150) 00:13:26.541 fused_ordering(151) 00:13:26.541 fused_ordering(152) 00:13:26.541 fused_ordering(153) 00:13:26.541 fused_ordering(154) 00:13:26.541 fused_ordering(155) 00:13:26.541 fused_ordering(156) 00:13:26.541 fused_ordering(157) 00:13:26.541 fused_ordering(158) 00:13:26.541 fused_ordering(159) 00:13:26.541 fused_ordering(160) 00:13:26.541 fused_ordering(161) 00:13:26.541 fused_ordering(162) 00:13:26.541 fused_ordering(163) 00:13:26.541 fused_ordering(164) 00:13:26.541 fused_ordering(165) 00:13:26.541 fused_ordering(166) 00:13:26.541 fused_ordering(167) 00:13:26.541 fused_ordering(168) 00:13:26.541 fused_ordering(169) 00:13:26.541 fused_ordering(170) 00:13:26.541 fused_ordering(171) 00:13:26.541 fused_ordering(172) 00:13:26.541 fused_ordering(173) 00:13:26.541 fused_ordering(174) 00:13:26.541 fused_ordering(175) 00:13:26.541 fused_ordering(176) 00:13:26.541 fused_ordering(177) 00:13:26.541 fused_ordering(178) 00:13:26.541 fused_ordering(179) 00:13:26.541 fused_ordering(180) 00:13:26.541 fused_ordering(181) 00:13:26.541 fused_ordering(182) 00:13:26.541 fused_ordering(183) 00:13:26.541 fused_ordering(184) 00:13:26.541 fused_ordering(185) 00:13:26.541 fused_ordering(186) 00:13:26.541 fused_ordering(187) 00:13:26.541 fused_ordering(188) 00:13:26.541 fused_ordering(189) 00:13:26.541 fused_ordering(190) 00:13:26.541 fused_ordering(191) 00:13:26.541 fused_ordering(192) 00:13:26.541 fused_ordering(193) 00:13:26.541 fused_ordering(194) 00:13:26.541 fused_ordering(195) 00:13:26.541 fused_ordering(196) 00:13:26.541 fused_ordering(197) 00:13:26.541 fused_ordering(198) 00:13:26.541 fused_ordering(199) 00:13:26.541 fused_ordering(200) 00:13:26.541 fused_ordering(201) 00:13:26.541 fused_ordering(202) 00:13:26.541 fused_ordering(203) 00:13:26.541 
fused_ordering(204) 00:13:26.541 fused_ordering(205) 00:13:27.107 fused_ordering(206) 00:13:27.107 fused_ordering(207) 00:13:27.107 fused_ordering(208) 00:13:27.107 fused_ordering(209) 00:13:27.107 fused_ordering(210) 00:13:27.107 fused_ordering(211) 00:13:27.107 fused_ordering(212) 00:13:27.107 fused_ordering(213) 00:13:27.107 fused_ordering(214) 00:13:27.107 fused_ordering(215) 00:13:27.107 fused_ordering(216) 00:13:27.107 fused_ordering(217) 00:13:27.107 fused_ordering(218) 00:13:27.107 fused_ordering(219) 00:13:27.107 fused_ordering(220) 00:13:27.107 fused_ordering(221) 00:13:27.107 fused_ordering(222) 00:13:27.107 fused_ordering(223) 00:13:27.107 fused_ordering(224) 00:13:27.107 fused_ordering(225) 00:13:27.107 fused_ordering(226) 00:13:27.107 fused_ordering(227) 00:13:27.107 fused_ordering(228) 00:13:27.107 fused_ordering(229) 00:13:27.107 fused_ordering(230) 00:13:27.107 fused_ordering(231) 00:13:27.107 fused_ordering(232) 00:13:27.107 fused_ordering(233) 00:13:27.107 fused_ordering(234) 00:13:27.107 fused_ordering(235) 00:13:27.107 fused_ordering(236) 00:13:27.107 fused_ordering(237) 00:13:27.107 fused_ordering(238) 00:13:27.107 fused_ordering(239) 00:13:27.107 fused_ordering(240) 00:13:27.107 fused_ordering(241) 00:13:27.107 fused_ordering(242) 00:13:27.107 fused_ordering(243) 00:13:27.107 fused_ordering(244) 00:13:27.107 fused_ordering(245) 00:13:27.107 fused_ordering(246) 00:13:27.107 fused_ordering(247) 00:13:27.107 fused_ordering(248) 00:13:27.107 fused_ordering(249) 00:13:27.107 fused_ordering(250) 00:13:27.107 fused_ordering(251) 00:13:27.107 fused_ordering(252) 00:13:27.107 fused_ordering(253) 00:13:27.107 fused_ordering(254) 00:13:27.107 fused_ordering(255) 00:13:27.107 fused_ordering(256) 00:13:27.107 fused_ordering(257) 00:13:27.107 fused_ordering(258) 00:13:27.107 fused_ordering(259) 00:13:27.107 fused_ordering(260) 00:13:27.107 fused_ordering(261) 00:13:27.107 fused_ordering(262) 00:13:27.107 fused_ordering(263) 00:13:27.107 fused_ordering(264) 00:13:27.107 fused_ordering(265) 00:13:27.107 fused_ordering(266) 00:13:27.107 fused_ordering(267) 00:13:27.107 fused_ordering(268) 00:13:27.107 fused_ordering(269) 00:13:27.107 fused_ordering(270) 00:13:27.107 fused_ordering(271) 00:13:27.107 fused_ordering(272) 00:13:27.107 fused_ordering(273) 00:13:27.107 fused_ordering(274) 00:13:27.107 fused_ordering(275) 00:13:27.107 fused_ordering(276) 00:13:27.107 fused_ordering(277) 00:13:27.107 fused_ordering(278) 00:13:27.107 fused_ordering(279) 00:13:27.107 fused_ordering(280) 00:13:27.107 fused_ordering(281) 00:13:27.107 fused_ordering(282) 00:13:27.107 fused_ordering(283) 00:13:27.107 fused_ordering(284) 00:13:27.107 fused_ordering(285) 00:13:27.107 fused_ordering(286) 00:13:27.107 fused_ordering(287) 00:13:27.107 fused_ordering(288) 00:13:27.107 fused_ordering(289) 00:13:27.107 fused_ordering(290) 00:13:27.107 fused_ordering(291) 00:13:27.107 fused_ordering(292) 00:13:27.107 fused_ordering(293) 00:13:27.107 fused_ordering(294) 00:13:27.107 fused_ordering(295) 00:13:27.107 fused_ordering(296) 00:13:27.107 fused_ordering(297) 00:13:27.107 fused_ordering(298) 00:13:27.107 fused_ordering(299) 00:13:27.107 fused_ordering(300) 00:13:27.107 fused_ordering(301) 00:13:27.107 fused_ordering(302) 00:13:27.107 fused_ordering(303) 00:13:27.107 fused_ordering(304) 00:13:27.107 fused_ordering(305) 00:13:27.107 fused_ordering(306) 00:13:27.107 fused_ordering(307) 00:13:27.107 fused_ordering(308) 00:13:27.107 fused_ordering(309) 00:13:27.107 fused_ordering(310) 00:13:27.107 fused_ordering(311) 
00:13:27.107 fused_ordering(312) 00:13:27.107 fused_ordering(313) 00:13:27.107 fused_ordering(314) 00:13:27.107 fused_ordering(315) 00:13:27.107 fused_ordering(316) 00:13:27.107 fused_ordering(317) 00:13:27.107 fused_ordering(318) 00:13:27.107 fused_ordering(319) 00:13:27.107 fused_ordering(320) 00:13:27.107 fused_ordering(321) 00:13:27.107 fused_ordering(322) 00:13:27.107 fused_ordering(323) 00:13:27.107 fused_ordering(324) 00:13:27.107 fused_ordering(325) 00:13:27.107 fused_ordering(326) 00:13:27.107 fused_ordering(327) 00:13:27.107 fused_ordering(328) 00:13:27.107 fused_ordering(329) 00:13:27.107 fused_ordering(330) 00:13:27.107 fused_ordering(331) 00:13:27.107 fused_ordering(332) 00:13:27.107 fused_ordering(333) 00:13:27.107 fused_ordering(334) 00:13:27.107 fused_ordering(335) 00:13:27.107 fused_ordering(336) 00:13:27.107 fused_ordering(337) 00:13:27.107 fused_ordering(338) 00:13:27.107 fused_ordering(339) 00:13:27.107 fused_ordering(340) 00:13:27.107 fused_ordering(341) 00:13:27.107 fused_ordering(342) 00:13:27.107 fused_ordering(343) 00:13:27.107 fused_ordering(344) 00:13:27.107 fused_ordering(345) 00:13:27.107 fused_ordering(346) 00:13:27.107 fused_ordering(347) 00:13:27.107 fused_ordering(348) 00:13:27.107 fused_ordering(349) 00:13:27.107 fused_ordering(350) 00:13:27.107 fused_ordering(351) 00:13:27.107 fused_ordering(352) 00:13:27.107 fused_ordering(353) 00:13:27.107 fused_ordering(354) 00:13:27.107 fused_ordering(355) 00:13:27.107 fused_ordering(356) 00:13:27.107 fused_ordering(357) 00:13:27.107 fused_ordering(358) 00:13:27.107 fused_ordering(359) 00:13:27.107 fused_ordering(360) 00:13:27.107 fused_ordering(361) 00:13:27.107 fused_ordering(362) 00:13:27.107 fused_ordering(363) 00:13:27.107 fused_ordering(364) 00:13:27.107 fused_ordering(365) 00:13:27.107 fused_ordering(366) 00:13:27.107 fused_ordering(367) 00:13:27.107 fused_ordering(368) 00:13:27.107 fused_ordering(369) 00:13:27.107 fused_ordering(370) 00:13:27.107 fused_ordering(371) 00:13:27.107 fused_ordering(372) 00:13:27.107 fused_ordering(373) 00:13:27.107 fused_ordering(374) 00:13:27.107 fused_ordering(375) 00:13:27.107 fused_ordering(376) 00:13:27.107 fused_ordering(377) 00:13:27.107 fused_ordering(378) 00:13:27.107 fused_ordering(379) 00:13:27.107 fused_ordering(380) 00:13:27.107 fused_ordering(381) 00:13:27.107 fused_ordering(382) 00:13:27.107 fused_ordering(383) 00:13:27.108 fused_ordering(384) 00:13:27.108 fused_ordering(385) 00:13:27.108 fused_ordering(386) 00:13:27.108 fused_ordering(387) 00:13:27.108 fused_ordering(388) 00:13:27.108 fused_ordering(389) 00:13:27.108 fused_ordering(390) 00:13:27.108 fused_ordering(391) 00:13:27.108 fused_ordering(392) 00:13:27.108 fused_ordering(393) 00:13:27.108 fused_ordering(394) 00:13:27.108 fused_ordering(395) 00:13:27.108 fused_ordering(396) 00:13:27.108 fused_ordering(397) 00:13:27.108 fused_ordering(398) 00:13:27.108 fused_ordering(399) 00:13:27.108 fused_ordering(400) 00:13:27.108 fused_ordering(401) 00:13:27.108 fused_ordering(402) 00:13:27.108 fused_ordering(403) 00:13:27.108 fused_ordering(404) 00:13:27.108 fused_ordering(405) 00:13:27.108 fused_ordering(406) 00:13:27.108 fused_ordering(407) 00:13:27.108 fused_ordering(408) 00:13:27.108 fused_ordering(409) 00:13:27.108 fused_ordering(410) 00:13:27.366 fused_ordering(411) 00:13:27.366 fused_ordering(412) 00:13:27.366 fused_ordering(413) 00:13:27.366 fused_ordering(414) 00:13:27.366 fused_ordering(415) 00:13:27.366 fused_ordering(416) 00:13:27.366 fused_ordering(417) 00:13:27.366 fused_ordering(418) 00:13:27.366 
fused_ordering(419) 00:13:27.366 fused_ordering(420) 00:13:27.366 fused_ordering(421) 00:13:27.366 fused_ordering(422) 00:13:27.366 fused_ordering(423) 00:13:27.366 fused_ordering(424) 00:13:27.366 fused_ordering(425) 00:13:27.366 fused_ordering(426) 00:13:27.366 fused_ordering(427) 00:13:27.366 fused_ordering(428) 00:13:27.366 fused_ordering(429) 00:13:27.366 fused_ordering(430) 00:13:27.366 fused_ordering(431) 00:13:27.366 fused_ordering(432) 00:13:27.366 fused_ordering(433) 00:13:27.366 fused_ordering(434) 00:13:27.366 fused_ordering(435) 00:13:27.366 fused_ordering(436) 00:13:27.366 fused_ordering(437) 00:13:27.366 fused_ordering(438) 00:13:27.366 fused_ordering(439) 00:13:27.366 fused_ordering(440) 00:13:27.366 fused_ordering(441) 00:13:27.366 fused_ordering(442) 00:13:27.366 fused_ordering(443) 00:13:27.366 fused_ordering(444) 00:13:27.366 fused_ordering(445) 00:13:27.366 fused_ordering(446) 00:13:27.366 fused_ordering(447) 00:13:27.366 fused_ordering(448) 00:13:27.366 fused_ordering(449) 00:13:27.366 fused_ordering(450) 00:13:27.366 fused_ordering(451) 00:13:27.366 fused_ordering(452) 00:13:27.366 fused_ordering(453) 00:13:27.366 fused_ordering(454) 00:13:27.366 fused_ordering(455) 00:13:27.366 fused_ordering(456) 00:13:27.366 fused_ordering(457) 00:13:27.366 fused_ordering(458) 00:13:27.366 fused_ordering(459) 00:13:27.366 fused_ordering(460) 00:13:27.366 fused_ordering(461) 00:13:27.366 fused_ordering(462) 00:13:27.366 fused_ordering(463) 00:13:27.366 fused_ordering(464) 00:13:27.366 fused_ordering(465) 00:13:27.366 fused_ordering(466) 00:13:27.366 fused_ordering(467) 00:13:27.366 fused_ordering(468) 00:13:27.366 fused_ordering(469) 00:13:27.366 fused_ordering(470) 00:13:27.366 fused_ordering(471) 00:13:27.366 fused_ordering(472) 00:13:27.366 fused_ordering(473) 00:13:27.366 fused_ordering(474) 00:13:27.366 fused_ordering(475) 00:13:27.366 fused_ordering(476) 00:13:27.366 fused_ordering(477) 00:13:27.366 fused_ordering(478) 00:13:27.366 fused_ordering(479) 00:13:27.366 fused_ordering(480) 00:13:27.366 fused_ordering(481) 00:13:27.366 fused_ordering(482) 00:13:27.366 fused_ordering(483) 00:13:27.366 fused_ordering(484) 00:13:27.366 fused_ordering(485) 00:13:27.366 fused_ordering(486) 00:13:27.366 fused_ordering(487) 00:13:27.366 fused_ordering(488) 00:13:27.366 fused_ordering(489) 00:13:27.366 fused_ordering(490) 00:13:27.366 fused_ordering(491) 00:13:27.366 fused_ordering(492) 00:13:27.366 fused_ordering(493) 00:13:27.366 fused_ordering(494) 00:13:27.366 fused_ordering(495) 00:13:27.366 fused_ordering(496) 00:13:27.366 fused_ordering(497) 00:13:27.366 fused_ordering(498) 00:13:27.366 fused_ordering(499) 00:13:27.366 fused_ordering(500) 00:13:27.366 fused_ordering(501) 00:13:27.366 fused_ordering(502) 00:13:27.366 fused_ordering(503) 00:13:27.366 fused_ordering(504) 00:13:27.366 fused_ordering(505) 00:13:27.366 fused_ordering(506) 00:13:27.366 fused_ordering(507) 00:13:27.366 fused_ordering(508) 00:13:27.366 fused_ordering(509) 00:13:27.366 fused_ordering(510) 00:13:27.366 fused_ordering(511) 00:13:27.366 fused_ordering(512) 00:13:27.366 fused_ordering(513) 00:13:27.366 fused_ordering(514) 00:13:27.366 fused_ordering(515) 00:13:27.366 fused_ordering(516) 00:13:27.366 fused_ordering(517) 00:13:27.366 fused_ordering(518) 00:13:27.366 fused_ordering(519) 00:13:27.366 fused_ordering(520) 00:13:27.366 fused_ordering(521) 00:13:27.366 fused_ordering(522) 00:13:27.366 fused_ordering(523) 00:13:27.366 fused_ordering(524) 00:13:27.366 fused_ordering(525) 00:13:27.366 fused_ordering(526) 
00:13:27.366 fused_ordering(527) 00:13:27.366 fused_ordering(528) 00:13:27.366 fused_ordering(529) 00:13:27.366 fused_ordering(530) 00:13:27.366 fused_ordering(531) 00:13:27.366 fused_ordering(532) 00:13:27.366 fused_ordering(533) 00:13:27.366 fused_ordering(534) 00:13:27.366 fused_ordering(535) 00:13:27.366 fused_ordering(536) 00:13:27.366 fused_ordering(537) 00:13:27.366 fused_ordering(538) 00:13:27.366 fused_ordering(539) 00:13:27.366 fused_ordering(540) 00:13:27.366 fused_ordering(541) 00:13:27.367 fused_ordering(542) 00:13:27.367 fused_ordering(543) 00:13:27.367 fused_ordering(544) 00:13:27.367 fused_ordering(545) 00:13:27.367 fused_ordering(546) 00:13:27.367 fused_ordering(547) 00:13:27.367 fused_ordering(548) 00:13:27.367 fused_ordering(549) 00:13:27.367 fused_ordering(550) 00:13:27.367 fused_ordering(551) 00:13:27.367 fused_ordering(552) 00:13:27.367 fused_ordering(553) 00:13:27.367 fused_ordering(554) 00:13:27.367 fused_ordering(555) 00:13:27.367 fused_ordering(556) 00:13:27.367 fused_ordering(557) 00:13:27.367 fused_ordering(558) 00:13:27.367 fused_ordering(559) 00:13:27.367 fused_ordering(560) 00:13:27.367 fused_ordering(561) 00:13:27.367 fused_ordering(562) 00:13:27.367 fused_ordering(563) 00:13:27.367 fused_ordering(564) 00:13:27.367 fused_ordering(565) 00:13:27.367 fused_ordering(566) 00:13:27.367 fused_ordering(567) 00:13:27.367 fused_ordering(568) 00:13:27.367 fused_ordering(569) 00:13:27.367 fused_ordering(570) 00:13:27.367 fused_ordering(571) 00:13:27.367 fused_ordering(572) 00:13:27.367 fused_ordering(573) 00:13:27.367 fused_ordering(574) 00:13:27.367 fused_ordering(575) 00:13:27.367 fused_ordering(576) 00:13:27.367 fused_ordering(577) 00:13:27.367 fused_ordering(578) 00:13:27.367 fused_ordering(579) 00:13:27.367 fused_ordering(580) 00:13:27.367 fused_ordering(581) 00:13:27.367 fused_ordering(582) 00:13:27.367 fused_ordering(583) 00:13:27.367 fused_ordering(584) 00:13:27.367 fused_ordering(585) 00:13:27.367 fused_ordering(586) 00:13:27.367 fused_ordering(587) 00:13:27.367 fused_ordering(588) 00:13:27.367 fused_ordering(589) 00:13:27.367 fused_ordering(590) 00:13:27.367 fused_ordering(591) 00:13:27.367 fused_ordering(592) 00:13:27.367 fused_ordering(593) 00:13:27.367 fused_ordering(594) 00:13:27.367 fused_ordering(595) 00:13:27.367 fused_ordering(596) 00:13:27.367 fused_ordering(597) 00:13:27.367 fused_ordering(598) 00:13:27.367 fused_ordering(599) 00:13:27.367 fused_ordering(600) 00:13:27.367 fused_ordering(601) 00:13:27.367 fused_ordering(602) 00:13:27.367 fused_ordering(603) 00:13:27.367 fused_ordering(604) 00:13:27.367 fused_ordering(605) 00:13:27.367 fused_ordering(606) 00:13:27.367 fused_ordering(607) 00:13:27.367 fused_ordering(608) 00:13:27.367 fused_ordering(609) 00:13:27.367 fused_ordering(610) 00:13:27.367 fused_ordering(611) 00:13:27.367 fused_ordering(612) 00:13:27.367 fused_ordering(613) 00:13:27.367 fused_ordering(614) 00:13:27.367 fused_ordering(615) 00:13:27.933 fused_ordering(616) 00:13:27.933 fused_ordering(617) 00:13:27.933 fused_ordering(618) 00:13:27.933 fused_ordering(619) 00:13:27.933 fused_ordering(620) 00:13:27.933 fused_ordering(621) 00:13:27.933 fused_ordering(622) 00:13:27.933 fused_ordering(623) 00:13:27.933 fused_ordering(624) 00:13:27.933 fused_ordering(625) 00:13:27.933 fused_ordering(626) 00:13:27.933 fused_ordering(627) 00:13:27.933 fused_ordering(628) 00:13:27.933 fused_ordering(629) 00:13:27.933 fused_ordering(630) 00:13:27.933 fused_ordering(631) 00:13:27.933 fused_ordering(632) 00:13:27.933 fused_ordering(633) 00:13:27.933 
fused_ordering(634) 00:13:27.933 fused_ordering(635) 00:13:27.933 fused_ordering(636) 00:13:27.933 fused_ordering(637) 00:13:27.933 fused_ordering(638) 00:13:27.933 fused_ordering(639) 00:13:27.933 fused_ordering(640) 00:13:27.933 fused_ordering(641) 00:13:27.933 fused_ordering(642) 00:13:27.933 fused_ordering(643) 00:13:27.933 fused_ordering(644) 00:13:27.933 fused_ordering(645) 00:13:27.933 fused_ordering(646) 00:13:27.933 fused_ordering(647) 00:13:27.933 fused_ordering(648) 00:13:27.933 fused_ordering(649) 00:13:27.933 fused_ordering(650) 00:13:27.933 fused_ordering(651) 00:13:27.933 fused_ordering(652) 00:13:27.933 fused_ordering(653) 00:13:27.933 fused_ordering(654) 00:13:27.933 fused_ordering(655) 00:13:27.933 fused_ordering(656) 00:13:27.933 fused_ordering(657) 00:13:27.933 fused_ordering(658) 00:13:27.933 fused_ordering(659) 00:13:27.933 fused_ordering(660) 00:13:27.933 fused_ordering(661) 00:13:27.933 fused_ordering(662) 00:13:27.933 fused_ordering(663) 00:13:27.933 fused_ordering(664) 00:13:27.933 fused_ordering(665) 00:13:27.933 fused_ordering(666) 00:13:27.933 fused_ordering(667) 00:13:27.933 fused_ordering(668) 00:13:27.933 fused_ordering(669) 00:13:27.933 fused_ordering(670) 00:13:27.933 fused_ordering(671) 00:13:27.933 fused_ordering(672) 00:13:27.933 fused_ordering(673) 00:13:27.933 fused_ordering(674) 00:13:27.933 fused_ordering(675) 00:13:27.933 fused_ordering(676) 00:13:27.933 fused_ordering(677) 00:13:27.933 fused_ordering(678) 00:13:27.933 fused_ordering(679) 00:13:27.933 fused_ordering(680) 00:13:27.933 fused_ordering(681) 00:13:27.933 fused_ordering(682) 00:13:27.933 fused_ordering(683) 00:13:27.933 fused_ordering(684) 00:13:27.933 fused_ordering(685) 00:13:27.933 fused_ordering(686) 00:13:27.933 fused_ordering(687) 00:13:27.933 fused_ordering(688) 00:13:27.933 fused_ordering(689) 00:13:27.933 fused_ordering(690) 00:13:27.933 fused_ordering(691) 00:13:27.933 fused_ordering(692) 00:13:27.933 fused_ordering(693) 00:13:27.933 fused_ordering(694) 00:13:27.933 fused_ordering(695) 00:13:27.933 fused_ordering(696) 00:13:27.933 fused_ordering(697) 00:13:27.933 fused_ordering(698) 00:13:27.933 fused_ordering(699) 00:13:27.933 fused_ordering(700) 00:13:27.933 fused_ordering(701) 00:13:27.934 fused_ordering(702) 00:13:27.934 fused_ordering(703) 00:13:27.934 fused_ordering(704) 00:13:27.934 fused_ordering(705) 00:13:27.934 fused_ordering(706) 00:13:27.934 fused_ordering(707) 00:13:27.934 fused_ordering(708) 00:13:27.934 fused_ordering(709) 00:13:27.934 fused_ordering(710) 00:13:27.934 fused_ordering(711) 00:13:27.934 fused_ordering(712) 00:13:27.934 fused_ordering(713) 00:13:27.934 fused_ordering(714) 00:13:27.934 fused_ordering(715) 00:13:27.934 fused_ordering(716) 00:13:27.934 fused_ordering(717) 00:13:27.934 fused_ordering(718) 00:13:27.934 fused_ordering(719) 00:13:27.934 fused_ordering(720) 00:13:27.934 fused_ordering(721) 00:13:27.934 fused_ordering(722) 00:13:27.934 fused_ordering(723) 00:13:27.934 fused_ordering(724) 00:13:27.934 fused_ordering(725) 00:13:27.934 fused_ordering(726) 00:13:27.934 fused_ordering(727) 00:13:27.934 fused_ordering(728) 00:13:27.934 fused_ordering(729) 00:13:27.934 fused_ordering(730) 00:13:27.934 fused_ordering(731) 00:13:27.934 fused_ordering(732) 00:13:27.934 fused_ordering(733) 00:13:27.934 fused_ordering(734) 00:13:27.934 fused_ordering(735) 00:13:27.934 fused_ordering(736) 00:13:27.934 fused_ordering(737) 00:13:27.934 fused_ordering(738) 00:13:27.934 fused_ordering(739) 00:13:27.934 fused_ordering(740) 00:13:27.934 fused_ordering(741) 
00:13:27.934 fused_ordering(742) 00:13:27.934 fused_ordering(743) 00:13:27.934 fused_ordering(744) 00:13:27.934 fused_ordering(745) 00:13:27.934 fused_ordering(746) 00:13:27.934 fused_ordering(747) 00:13:27.934 fused_ordering(748) 00:13:27.934 fused_ordering(749) 00:13:27.934 fused_ordering(750) 00:13:27.934 fused_ordering(751) 00:13:27.934 fused_ordering(752) 00:13:27.934 fused_ordering(753) 00:13:27.934 fused_ordering(754) 00:13:27.934 fused_ordering(755) 00:13:27.934 fused_ordering(756) 00:13:27.934 fused_ordering(757) 00:13:27.934 fused_ordering(758) 00:13:27.934 fused_ordering(759) 00:13:27.934 fused_ordering(760) 00:13:27.934 fused_ordering(761) 00:13:27.934 fused_ordering(762) 00:13:27.934 fused_ordering(763) 00:13:27.934 fused_ordering(764) 00:13:27.934 fused_ordering(765) 00:13:27.934 fused_ordering(766) 00:13:27.934 fused_ordering(767) 00:13:27.934 fused_ordering(768) 00:13:27.934 fused_ordering(769) 00:13:27.934 fused_ordering(770) 00:13:27.934 fused_ordering(771) 00:13:27.934 fused_ordering(772) 00:13:27.934 fused_ordering(773) 00:13:27.934 fused_ordering(774) 00:13:27.934 fused_ordering(775) 00:13:27.934 fused_ordering(776) 00:13:27.934 fused_ordering(777) 00:13:27.934 fused_ordering(778) 00:13:27.934 fused_ordering(779) 00:13:27.934 fused_ordering(780) 00:13:27.934 fused_ordering(781) 00:13:27.934 fused_ordering(782) 00:13:27.934 fused_ordering(783) 00:13:27.934 fused_ordering(784) 00:13:27.934 fused_ordering(785) 00:13:27.934 fused_ordering(786) 00:13:27.934 fused_ordering(787) 00:13:27.934 fused_ordering(788) 00:13:27.934 fused_ordering(789) 00:13:27.934 fused_ordering(790) 00:13:27.934 fused_ordering(791) 00:13:27.934 fused_ordering(792) 00:13:27.934 fused_ordering(793) 00:13:27.934 fused_ordering(794) 00:13:27.934 fused_ordering(795) 00:13:27.934 fused_ordering(796) 00:13:27.934 fused_ordering(797) 00:13:27.934 fused_ordering(798) 00:13:27.934 fused_ordering(799) 00:13:27.934 fused_ordering(800) 00:13:27.934 fused_ordering(801) 00:13:27.934 fused_ordering(802) 00:13:27.934 fused_ordering(803) 00:13:27.934 fused_ordering(804) 00:13:27.934 fused_ordering(805) 00:13:27.934 fused_ordering(806) 00:13:27.934 fused_ordering(807) 00:13:27.934 fused_ordering(808) 00:13:27.934 fused_ordering(809) 00:13:27.934 fused_ordering(810) 00:13:27.934 fused_ordering(811) 00:13:27.934 fused_ordering(812) 00:13:27.934 fused_ordering(813) 00:13:27.934 fused_ordering(814) 00:13:27.934 fused_ordering(815) 00:13:27.934 fused_ordering(816) 00:13:27.934 fused_ordering(817) 00:13:27.934 fused_ordering(818) 00:13:27.934 fused_ordering(819) 00:13:27.934 fused_ordering(820) 00:13:28.501 fused_ordering(821) 00:13:28.501 fused_ordering(822) 00:13:28.501 fused_ordering(823) 00:13:28.501 fused_ordering(824) 00:13:28.501 fused_ordering(825) 00:13:28.501 fused_ordering(826) 00:13:28.501 fused_ordering(827) 00:13:28.501 fused_ordering(828) 00:13:28.501 fused_ordering(829) 00:13:28.501 fused_ordering(830) 00:13:28.501 fused_ordering(831) 00:13:28.501 fused_ordering(832) 00:13:28.501 fused_ordering(833) 00:13:28.501 fused_ordering(834) 00:13:28.501 fused_ordering(835) 00:13:28.501 fused_ordering(836) 00:13:28.501 fused_ordering(837) 00:13:28.501 fused_ordering(838) 00:13:28.501 fused_ordering(839) 00:13:28.501 fused_ordering(840) 00:13:28.501 fused_ordering(841) 00:13:28.501 fused_ordering(842) 00:13:28.501 fused_ordering(843) 00:13:28.501 fused_ordering(844) 00:13:28.501 fused_ordering(845) 00:13:28.501 fused_ordering(846) 00:13:28.501 fused_ordering(847) 00:13:28.501 fused_ordering(848) 00:13:28.501 
fused_ordering(849) 00:13:28.501 fused_ordering(850) 00:13:28.501 fused_ordering(851) 00:13:28.501 fused_ordering(852) 00:13:28.501 fused_ordering(853) 00:13:28.501 fused_ordering(854) 00:13:28.501 fused_ordering(855) 00:13:28.501 fused_ordering(856) 00:13:28.501 fused_ordering(857) 00:13:28.501 fused_ordering(858) 00:13:28.501 fused_ordering(859) 00:13:28.501 fused_ordering(860) 00:13:28.501 fused_ordering(861) 00:13:28.501 fused_ordering(862) 00:13:28.501 fused_ordering(863) 00:13:28.501 fused_ordering(864) 00:13:28.501 fused_ordering(865) 00:13:28.501 fused_ordering(866) 00:13:28.501 fused_ordering(867) 00:13:28.501 fused_ordering(868) 00:13:28.501 fused_ordering(869) 00:13:28.501 fused_ordering(870) 00:13:28.501 fused_ordering(871) 00:13:28.501 fused_ordering(872) 00:13:28.501 fused_ordering(873) 00:13:28.501 fused_ordering(874) 00:13:28.501 fused_ordering(875) 00:13:28.501 fused_ordering(876) 00:13:28.501 fused_ordering(877) 00:13:28.501 fused_ordering(878) 00:13:28.501 fused_ordering(879) 00:13:28.501 fused_ordering(880) 00:13:28.501 fused_ordering(881) 00:13:28.501 fused_ordering(882) 00:13:28.501 fused_ordering(883) 00:13:28.501 fused_ordering(884) 00:13:28.501 fused_ordering(885) 00:13:28.501 fused_ordering(886) 00:13:28.501 fused_ordering(887) 00:13:28.501 fused_ordering(888) 00:13:28.501 fused_ordering(889) 00:13:28.501 fused_ordering(890) 00:13:28.501 fused_ordering(891) 00:13:28.501 fused_ordering(892) 00:13:28.501 fused_ordering(893) 00:13:28.501 fused_ordering(894) 00:13:28.501 fused_ordering(895) 00:13:28.501 fused_ordering(896) 00:13:28.501 fused_ordering(897) 00:13:28.501 fused_ordering(898) 00:13:28.501 fused_ordering(899) 00:13:28.501 fused_ordering(900) 00:13:28.501 fused_ordering(901) 00:13:28.501 fused_ordering(902) 00:13:28.501 fused_ordering(903) 00:13:28.501 fused_ordering(904) 00:13:28.501 fused_ordering(905) 00:13:28.501 fused_ordering(906) 00:13:28.501 fused_ordering(907) 00:13:28.501 fused_ordering(908) 00:13:28.501 fused_ordering(909) 00:13:28.501 fused_ordering(910) 00:13:28.501 fused_ordering(911) 00:13:28.501 fused_ordering(912) 00:13:28.501 fused_ordering(913) 00:13:28.501 fused_ordering(914) 00:13:28.501 fused_ordering(915) 00:13:28.501 fused_ordering(916) 00:13:28.501 fused_ordering(917) 00:13:28.501 fused_ordering(918) 00:13:28.501 fused_ordering(919) 00:13:28.501 fused_ordering(920) 00:13:28.501 fused_ordering(921) 00:13:28.501 fused_ordering(922) 00:13:28.501 fused_ordering(923) 00:13:28.501 fused_ordering(924) 00:13:28.501 fused_ordering(925) 00:13:28.501 fused_ordering(926) 00:13:28.501 fused_ordering(927) 00:13:28.501 fused_ordering(928) 00:13:28.501 fused_ordering(929) 00:13:28.501 fused_ordering(930) 00:13:28.501 fused_ordering(931) 00:13:28.501 fused_ordering(932) 00:13:28.501 fused_ordering(933) 00:13:28.501 fused_ordering(934) 00:13:28.501 fused_ordering(935) 00:13:28.501 fused_ordering(936) 00:13:28.501 fused_ordering(937) 00:13:28.501 fused_ordering(938) 00:13:28.501 fused_ordering(939) 00:13:28.501 fused_ordering(940) 00:13:28.501 fused_ordering(941) 00:13:28.501 fused_ordering(942) 00:13:28.501 fused_ordering(943) 00:13:28.501 fused_ordering(944) 00:13:28.501 fused_ordering(945) 00:13:28.501 fused_ordering(946) 00:13:28.501 fused_ordering(947) 00:13:28.501 fused_ordering(948) 00:13:28.501 fused_ordering(949) 00:13:28.501 fused_ordering(950) 00:13:28.501 fused_ordering(951) 00:13:28.501 fused_ordering(952) 00:13:28.501 fused_ordering(953) 00:13:28.501 fused_ordering(954) 00:13:28.501 fused_ordering(955) 00:13:28.501 fused_ordering(956) 
00:13:28.501 fused_ordering(957) 00:13:28.501 fused_ordering(958) 00:13:28.501 fused_ordering(959) 00:13:28.501 fused_ordering(960) 00:13:28.501 fused_ordering(961) 00:13:28.501 fused_ordering(962) 00:13:28.501 fused_ordering(963) 00:13:28.501 fused_ordering(964) 00:13:28.501 fused_ordering(965) 00:13:28.501 fused_ordering(966) 00:13:28.501 fused_ordering(967) 00:13:28.501 fused_ordering(968) 00:13:28.501 fused_ordering(969) 00:13:28.501 fused_ordering(970) 00:13:28.501 fused_ordering(971) 00:13:28.501 fused_ordering(972) 00:13:28.501 fused_ordering(973) 00:13:28.501 fused_ordering(974) 00:13:28.501 fused_ordering(975) 00:13:28.501 fused_ordering(976) 00:13:28.501 fused_ordering(977) 00:13:28.501 fused_ordering(978) 00:13:28.501 fused_ordering(979) 00:13:28.501 fused_ordering(980) 00:13:28.501 fused_ordering(981) 00:13:28.501 fused_ordering(982) 00:13:28.501 fused_ordering(983) 00:13:28.501 fused_ordering(984) 00:13:28.501 fused_ordering(985) 00:13:28.501 fused_ordering(986) 00:13:28.501 fused_ordering(987) 00:13:28.501 fused_ordering(988) 00:13:28.501 fused_ordering(989) 00:13:28.501 fused_ordering(990) 00:13:28.501 fused_ordering(991) 00:13:28.501 fused_ordering(992) 00:13:28.501 fused_ordering(993) 00:13:28.501 fused_ordering(994) 00:13:28.501 fused_ordering(995) 00:13:28.501 fused_ordering(996) 00:13:28.501 fused_ordering(997) 00:13:28.501 fused_ordering(998) 00:13:28.501 fused_ordering(999) 00:13:28.501 fused_ordering(1000) 00:13:28.501 fused_ordering(1001) 00:13:28.501 fused_ordering(1002) 00:13:28.501 fused_ordering(1003) 00:13:28.501 fused_ordering(1004) 00:13:28.501 fused_ordering(1005) 00:13:28.501 fused_ordering(1006) 00:13:28.501 fused_ordering(1007) 00:13:28.501 fused_ordering(1008) 00:13:28.501 fused_ordering(1009) 00:13:28.501 fused_ordering(1010) 00:13:28.501 fused_ordering(1011) 00:13:28.501 fused_ordering(1012) 00:13:28.501 fused_ordering(1013) 00:13:28.501 fused_ordering(1014) 00:13:28.501 fused_ordering(1015) 00:13:28.501 fused_ordering(1016) 00:13:28.501 fused_ordering(1017) 00:13:28.501 fused_ordering(1018) 00:13:28.501 fused_ordering(1019) 00:13:28.501 fused_ordering(1020) 00:13:28.501 fused_ordering(1021) 00:13:28.501 fused_ordering(1022) 00:13:28.501 fused_ordering(1023) 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.501 rmmod nvme_tcp 00:13:28.501 rmmod nvme_fabrics 00:13:28.501 rmmod nvme_keyring 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 492561 ']' 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 492561 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 492561 ']' 00:13:28.501 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 492561 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492561 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492561' 00:13:28.502 killing process with pid 492561 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 492561 00:13:28.502 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 492561 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.068 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.973 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.973 00:13:30.973 real 0m8.135s 00:13:30.973 user 0m5.961s 00:13:30.973 sys 0m3.090s 00:13:30.973 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.973 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.973 ************************************ 00:13:30.973 END TEST nvmf_fused_ordering 00:13:30.973 ************************************ 00:13:30.973 09:28:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.973 09:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.973 09:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.974 09:28:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.974 ************************************ 00:13:30.974 START TEST nvmf_ns_masking 00:13:30.974 ************************************ 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.974 * Looking for test storage... 00:13:30.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.974 09:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d38a0dc9-8733-41b3-82d9-f00981c9082a 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5f5fbd12-7c0d-4bad-a2c6-dbe28c572baf 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7405ae4c-a48d-4b75-a4ef-2ff133936fb3 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.974 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:33.507 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.507 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:33.508 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:33.508 Found net devices under 0000:82:00.0: cvl_0_0 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:33.508 Found net devices under 0000:82:00.1: cvl_0_1 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.508 09:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:13:33.508 00:13:33.508 --- 10.0.0.2 ping statistics --- 00:13:33.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.508 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:13:33.508 00:13:33.508 --- 10.0.0.1 ping statistics --- 00:13:33.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.508 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=494929 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 494929 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 494929 ']' 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.508 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.508 [2024-07-25 09:28:05.932086] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:13:33.508 [2024-07-25 09:28:05.932174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.508 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.508 [2024-07-25 09:28:05.995892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.508 [2024-07-25 09:28:06.101190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.508 [2024-07-25 09:28:06.101244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.508 [2024-07-25 09:28:06.101273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.508 [2024-07-25 09:28:06.101285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.508 [2024-07-25 09:28:06.101294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.508 [2024-07-25 09:28:06.101320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.508 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:34.074 [2024-07-25 09:28:06.510651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.074 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:34.074 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:34.074 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:34.074 Malloc1 00:13:34.331 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:34.692 Malloc2 00:13:34.692 09:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.692 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:34.950 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.206 [2024-07-25 09:28:07.839605] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.206 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:35.206 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7405ae4c-a48d-4b75-a4ef-2ff133936fb3 -a 10.0.0.2 -s 4420 -i 4 00:13:35.464 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.464 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:13:35.464 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.464 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:35.464 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:37.362 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.362 [ 0]:0x1 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ae24069e00b34f4a9c297283fa07977b 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ae24069e00b34f4a9c297283fa07977b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.362 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:37.621 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:37.621 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.621 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.621 [ 0]:0x1 00:13:37.621 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ae24069e00b34f4a9c297283fa07977b 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ae24069e00b34f4a9c297283fa07977b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.878 [ 1]:0x2 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:37.878 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.136 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.393 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:38.651 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:38.651 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7405ae4c-a48d-4b75-a4ef-2ff133936fb3 -a 10.0.0.2 -s 4420 -i 4 00:13:38.909 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:38.909 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:13:38.909 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.909 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:13:38.909 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:13:38.909 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:13:40.806 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:40.807 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.064 [ 0]:0x2 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.064 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.322 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:41.322 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.322 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.322 [ 0]:0x1 00:13:41.579 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.579 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.579 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ae24069e00b34f4a9c297283fa07977b 00:13:41.579 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ae24069e00b34f4a9c297283fa07977b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.579 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.580 [ 1]:0x2 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.580 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.838 [ 0]:0x2 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:41.838 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.096 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7405ae4c-a48d-4b75-a4ef-2ff133936fb3 -a 10.0.0.2 -s 4420 -i 4 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:13:42.354 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:13:44.251 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:44.252 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.509 [ 0]:0x1 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ae24069e00b34f4a9c297283fa07977b 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ae24069e00b34f4a9c297283fa07977b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:44.509 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.510 [ 1]:0x2 00:13:44.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:44.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:44.768 09:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.768 [ 0]:0x2 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:44.768 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.025 [2024-07-25 09:28:17.706005] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:45.025 request: 00:13:45.025 { 00:13:45.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.025 "nsid": 2, 00:13:45.025 "host": "nqn.2016-06.io.spdk:host1", 00:13:45.025 "method": "nvmf_ns_remove_host", 00:13:45.025 "req_id": 1 00:13:45.025 } 00:13:45.025 Got JSON-RPC error response 00:13:45.025 response: 00:13:45.025 { 00:13:45.025 "code": -32602, 00:13:45.025 "message": "Invalid parameters" 00:13:45.025 } 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.025 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:45.026 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.026 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.026 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.026 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.284 [ 0]:0x2 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3f3ef3bc1584af5a63821ae625e49bc 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3f3ef3bc1584af5a63821ae625e49bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=496420 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 496420 /var/tmp/host.sock 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 496420 ']' 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:45.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.284 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.284 [2024-07-25 09:28:17.905688] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:13:45.284 [2024-07-25 09:28:17.905765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496420 ] 00:13:45.284 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.284 [2024-07-25 09:28:17.965506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.542 [2024-07-25 09:28:18.081158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.800 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.800 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:45.800 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.058 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.316 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d38a0dc9-8733-41b3-82d9-f00981c9082a 00:13:46.316 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:46.316 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D38A0DC9873341B382D9F00981C9082A -i 00:13:46.573 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5f5fbd12-7c0d-4bad-a2c6-dbe28c572baf 00:13:46.573 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:46.573 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5F5FBD127C0D4BADA2C6DBE28C572BAF -i 00:13:46.830 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:47.088 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:47.346 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:47.346 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:47.910 nvme0n1 00:13:47.910 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:47.910 09:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:48.169 nvme1n2 00:13:48.169 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:48.169 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:48.169 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:48.169 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:48.169 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:48.427 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:48.427 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:48.427 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:48.427 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:48.427 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d38a0dc9-8733-41b3-82d9-f00981c9082a == \d\3\8\a\0\d\c\9\-\8\7\3\3\-\4\1\b\3\-\8\2\d\9\-\f\0\0\9\8\1\c\9\0\8\2\a ]] 00:13:48.427 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:48.427 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:48.427 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5f5fbd12-7c0d-4bad-a2c6-dbe28c572baf == \5\f\5\f\b\d\1\2\-\7\c\0\d\-\4\b\a\d\-\a\2\c\6\-\d\b\e\2\8\c\5\7\2\b\a\f ]] 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 496420 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 496420 ']' 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 496420 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.684 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 496420 00:13:48.942 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:48.942 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:48.942 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 496420' 00:13:48.942 killing process with pid 496420 00:13:48.942 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 496420 00:13:48.942 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 496420 00:13:49.200 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.457 rmmod nvme_tcp 00:13:49.457 rmmod nvme_fabrics 00:13:49.457 rmmod nvme_keyring 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 494929 ']' 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 494929 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 494929 ']' 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 494929 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.457 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 494929 00:13:49.715 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.715 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.715 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 494929' 00:13:49.715 killing process with pid 494929 00:13:49.715 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 494929 00:13:49.715 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 494929 00:13:49.973 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.973 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.974 09:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.974 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.974 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.974 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.974 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.974 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.873 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.873 00:13:51.873 real 0m20.985s 00:13:51.873 user 0m27.329s 00:13:51.873 sys 0m4.048s 00:13:51.873 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.873 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.873 ************************************ 00:13:51.873 END TEST nvmf_ns_masking 00:13:51.873 ************************************ 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.131 ************************************ 00:13:52.131 START TEST nvmf_nvme_cli 00:13:52.131 ************************************ 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:52.131 * Looking for test storage... 
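Before the nvme_cli output continues, the namespace-masking verification above is easier to read as the short host-side sequence it traces: attach a controller for a given host NQN, list the bdevs that become visible, and compare their UUIDs with the namespaces the target is supposed to expose to that host. A minimal sketch using the same calls seen in the trace (rpc.py stands for the scripts/rpc.py path shown above; the socket, addresses and NQNs are the ones used in this run):

    # attach as host2 against the RPC socket of the host-side SPDK instance
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

    # which namespaces does this host actually see?
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

    # the UUID of each visible bdev must match the namespace mapped to this host on the target
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'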
00:13:52.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.131 09:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.131 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:52.132 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.031 09:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:54.031 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:54.031 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.031 09:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:54.031 Found net devices under 0000:82:00.0: cvl_0_0 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:54.031 Found net devices under 0000:82:00.1: cvl_0_1 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.031 09:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.031 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:13:54.032 00:13:54.032 --- 10.0.0.2 ping statistics --- 00:13:54.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.032 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:13:54.032 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:13:54.289 00:13:54.289 --- 10.0.0.1 ping statistics --- 00:13:54.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.289 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.289 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=498906 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 498906 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 498906 ']' 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.290 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.290 [2024-07-25 09:28:26.842389] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
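The nvmftestinit phase traced above builds the standard two-port test topology on this machine: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Stripped of the xtrace noise, the setup amounts to roughly the following (interface names are the ones detected in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator

The nvmf_tgt whose startup begins here is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the DPDK/EAL banner and reactor messages below come from the netns-wrapped application.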
00:13:54.290 [2024-07-25 09:28:26.842486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.290 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.290 [2024-07-25 09:28:26.911240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.547 [2024-07-25 09:28:27.034045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.547 [2024-07-25 09:28:27.034101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.547 [2024-07-25 09:28:27.034117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.547 [2024-07-25 09:28:27.034131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.547 [2024-07-25 09:28:27.034144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.547 [2024-07-25 09:28:27.034232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.547 [2024-07-25 09:28:27.034287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.547 [2024-07-25 09:28:27.034339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.547 [2024-07-25 09:28:27.034342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.112 [2024-07-25 09:28:27.798864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.112 Malloc0 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:55.112 09:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 Malloc1 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 [2024-07-25 09:28:27.880513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:13:55.370 00:13:55.370 Discovery Log Number of Records 2, Generation counter 2 00:13:55.370 =====Discovery Log Entry 0====== 00:13:55.370 trtype: tcp 00:13:55.370 adrfam: ipv4 00:13:55.370 subtype: current discovery subsystem 00:13:55.370 treq: not required 
00:13:55.370 portid: 0 00:13:55.370 trsvcid: 4420 00:13:55.370 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:55.370 traddr: 10.0.0.2 00:13:55.370 eflags: explicit discovery connections, duplicate discovery information 00:13:55.370 sectype: none 00:13:55.370 =====Discovery Log Entry 1====== 00:13:55.370 trtype: tcp 00:13:55.370 adrfam: ipv4 00:13:55.370 subtype: nvme subsystem 00:13:55.370 treq: not required 00:13:55.370 portid: 0 00:13:55.370 trsvcid: 4420 00:13:55.370 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:55.370 traddr: 10.0.0.2 00:13:55.370 eflags: none 00:13:55.370 sectype: none 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:55.370 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.302 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:56.302 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:13:56.302 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.302 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:13:56.302 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:13:56.302 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:58.199 /dev/nvme0n1 ]] 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.199 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:58.457 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.715 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.715 rmmod nvme_tcp 00:13:58.715 rmmod nvme_fabrics 00:13:58.715 rmmod nvme_keyring 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 498906 ']' 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 498906 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 498906 ']' 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 498906 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 498906 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 498906' 00:13:58.715 killing process with pid 498906 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 498906 00:13:58.715 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 498906 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.280 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:01.183 00:14:01.183 real 0m9.115s 00:14:01.183 user 0m19.110s 00:14:01.183 sys 0m2.216s 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.183 ************************************ 00:14:01.183 END TEST nvmf_nvme_cli 00:14:01.183 ************************************ 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.183 ************************************ 00:14:01.183 START TEST nvmf_vfio_user 00:14:01.183 ************************************ 00:14:01.183 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:01.183 * Looking for test storage... 
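Before the vfio-user output continues, the nvme_cli test that just ended above reduces to a compact command flow: create the TCP transport, back two namespaces with malloc bdevs, publish them through one subsystem plus a discovery listener, then drive everything with stock nvme-cli from the initiator side. A condensed sketch with the same calls (rpc.py is scripts/rpc.py against the target's default RPC socket; $NVME_HOSTNQN and $NVME_HOSTID come from nvme gen-hostnqn as in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                  # both namespaces show up, e.g. /dev/nvme0n1 and /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1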
00:14:01.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:01.184 09:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=499840 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 499840' 00:14:01.184 Process pid: 499840 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 499840 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 499840 ']' 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.184 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 [2024-07-25 09:28:33.909415] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:14:01.184 [2024-07-25 09:28:33.909499] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.442 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.442 [2024-07-25 09:28:33.974545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.442 [2024-07-25 09:28:34.090228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.442 [2024-07-25 09:28:34.090281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:01.442 [2024-07-25 09:28:34.090294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.442 [2024-07-25 09:28:34.090305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.442 [2024-07-25 09:28:34.090315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.442 [2024-07-25 09:28:34.090443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.442 [2024-07-25 09:28:34.090509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.442 [2024-07-25 09:28:34.090574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.442 [2024-07-25 09:28:34.090577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.699 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.699 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:01.699 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:02.633 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:02.890 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:02.890 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:02.890 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:02.890 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:02.890 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:03.148 Malloc1 00:14:03.148 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:03.405 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:03.663 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:03.922 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:03.922 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:03.922 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:04.180 Malloc2 00:14:04.180 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
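
(For reference: the setup_nvmf_vfio_user sequence traced above reduces to the per-device RPC calls below. This is a condensed sketch, not verbatim script output; "rpc.py" stands for the full scripts/rpc.py path echoed in the trace, and $i is the device index from the seq 1 $NUM_DEVICES loop, so the directories and names match the vfio-user1/1 and vfio-user2/2 paths seen above.)

  rpc.py nvmf_create_transport -t VFIOUSER                                  # once, before the device loop
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i                         # socket directory for device $i
  rpc.py bdev_malloc_create 64 512 -b Malloc$i                              # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i     # -a: allow any host; serial SPDK$i
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i         # expose the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
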
00:14:04.437 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:04.694 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:04.953 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:04.953 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:04.953 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.953 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:04.953 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:04.953 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:04.953 [2024-07-25 09:28:37.510301] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:14:04.953 [2024-07-25 09:28:37.510368] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500378 ] 00:14:04.953 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.953 [2024-07-25 09:28:37.545765] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:04.953 [2024-07-25 09:28:37.553801] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.953 [2024-07-25 09:28:37.553829] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9c205d0000 00:14:04.953 [2024-07-25 09:28:37.554804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.555798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.556802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.557810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.558815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.559818] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.560822] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.561830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.953 [2024-07-25 09:28:37.562840] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.953 [2024-07-25 09:28:37.562861] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9c205c5000 00:14:04.953 [2024-07-25 09:28:37.563976] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.953 [2024-07-25 09:28:37.580000] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:04.953 [2024-07-25 09:28:37.580044] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:04.953 [2024-07-25 09:28:37.584959] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:04.953 [2024-07-25 09:28:37.585013] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:04.953 [2024-07-25 09:28:37.585103] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:04.953 [2024-07-25 09:28:37.585129] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:04.953 [2024-07-25 09:28:37.585138] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:04.953 [2024-07-25 09:28:37.585947] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:04.953 [2024-07-25 09:28:37.585971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:04.953 [2024-07-25 09:28:37.585985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:04.953 [2024-07-25 09:28:37.586950] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:04.953 [2024-07-25 09:28:37.586968] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:04.953 [2024-07-25 09:28:37.586981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.953 [2024-07-25 09:28:37.587952] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:04.953 [2024-07-25 09:28:37.587970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.954 [2024-07-25 09:28:37.588958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:04.954 [2024-07-25 09:28:37.588977] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:04.954 [2024-07-25 09:28:37.588986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:04.954 [2024-07-25 09:28:37.588997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.954 [2024-07-25 09:28:37.589106] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:04.954 [2024-07-25 09:28:37.589114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.954 [2024-07-25 09:28:37.589123] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:04.954 [2024-07-25 09:28:37.589965] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:04.954 [2024-07-25 09:28:37.590968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:04.954 [2024-07-25 09:28:37.591977] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:04.954 [2024-07-25 09:28:37.592975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.954 [2024-07-25 09:28:37.593103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.954 [2024-07-25 09:28:37.593993] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:04.954 [2024-07-25 09:28:37.594011] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.954 [2024-07-25 09:28:37.594020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594044] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:04.954 [2024-07-25 09:28:37.594061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594086] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.954 [2024-07-25 09:28:37.594096] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.954 [2024-07-25 09:28:37.594102] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.954 [2024-07-25 09:28:37.594120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594198] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:04.954 [2024-07-25 09:28:37.594206] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:04.954 [2024-07-25 09:28:37.594213] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:04.954 [2024-07-25 09:28:37.594220] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:04.954 [2024-07-25 09:28:37.594228] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:04.954 [2024-07-25 09:28:37.594235] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:04.954 [2024-07-25 09:28:37.594242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.954 [2024-07-25 09:28:37.594324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.954 [2024-07-25 09:28:37.594350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.954 [2024-07-25 09:28:37.594372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.954 [2024-07-25 09:28:37.594382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594443] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:04.954 
[2024-07-25 09:28:37.594452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594601] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:04.954 [2024-07-25 09:28:37.594609] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:04.954 [2024-07-25 09:28:37.594615] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.954 [2024-07-25 09:28:37.594625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594670] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:04.954 [2024-07-25 09:28:37.594686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594712] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.954 [2024-07-25 09:28:37.594719] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.954 [2024-07-25 09:28:37.594725] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.954 [2024-07-25 09:28:37.594734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594808] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.954 [2024-07-25 09:28:37.594816] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.954 [2024-07-25 09:28:37.594822] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.954 [2024-07-25 09:28:37.594831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.954 [2024-07-25 09:28:37.594844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:04.954 [2024-07-25 09:28:37.594858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594919] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:04.954 [2024-07-25 09:28:37.594926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:04.954 [2024-07-25 09:28:37.594934] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:04.954 [2024-07-25 09:28:37.594960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:04.955 [2024-07-25 09:28:37.594978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 09:28:37.594996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:04.955 [2024-07-25 09:28:37.595008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 09:28:37.595023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:04.955 [2024-07-25 
09:28:37.595035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 09:28:37.595050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.955 [2024-07-25 09:28:37.595062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 09:28:37.595083] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:04.955 [2024-07-25 09:28:37.595092] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:04.955 [2024-07-25 09:28:37.595098] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:04.955 [2024-07-25 09:28:37.595104] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:04.955 [2024-07-25 09:28:37.595113] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:04.955 [2024-07-25 09:28:37.595122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:04.955 [2024-07-25 09:28:37.595134] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:04.955 [2024-07-25 09:28:37.595141] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:04.955 [2024-07-25 09:28:37.595147] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.955 [2024-07-25 09:28:37.595156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:04.955 [2024-07-25 09:28:37.595166] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:04.955 [2024-07-25 09:28:37.595174] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.955 [2024-07-25 09:28:37.595180] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.955 [2024-07-25 09:28:37.595188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.955 [2024-07-25 09:28:37.595200] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:04.955 [2024-07-25 09:28:37.595208] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:04.955 [2024-07-25 09:28:37.595213] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.955 [2024-07-25 09:28:37.595222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:04.955 [2024-07-25 09:28:37.595233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 09:28:37.595252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 
09:28:37.595269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:04.955 [2024-07-25 09:28:37.595281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:04.955 ===================================================== 00:14:04.955 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:04.955 ===================================================== 00:14:04.955 Controller Capabilities/Features 00:14:04.955 ================================ 00:14:04.955 Vendor ID: 4e58 00:14:04.955 Subsystem Vendor ID: 4e58 00:14:04.955 Serial Number: SPDK1 00:14:04.955 Model Number: SPDK bdev Controller 00:14:04.955 Firmware Version: 24.09 00:14:04.955 Recommended Arb Burst: 6 00:14:04.955 IEEE OUI Identifier: 8d 6b 50 00:14:04.955 Multi-path I/O 00:14:04.955 May have multiple subsystem ports: Yes 00:14:04.955 May have multiple controllers: Yes 00:14:04.955 Associated with SR-IOV VF: No 00:14:04.955 Max Data Transfer Size: 131072 00:14:04.955 Max Number of Namespaces: 32 00:14:04.955 Max Number of I/O Queues: 127 00:14:04.955 NVMe Specification Version (VS): 1.3 00:14:04.955 NVMe Specification Version (Identify): 1.3 00:14:04.955 Maximum Queue Entries: 256 00:14:04.955 Contiguous Queues Required: Yes 00:14:04.955 Arbitration Mechanisms Supported 00:14:04.955 Weighted Round Robin: Not Supported 00:14:04.955 Vendor Specific: Not Supported 00:14:04.955 Reset Timeout: 15000 ms 00:14:04.955 Doorbell Stride: 4 bytes 00:14:04.955 NVM Subsystem Reset: Not Supported 00:14:04.955 Command Sets Supported 00:14:04.955 NVM Command Set: Supported 00:14:04.955 Boot Partition: Not Supported 00:14:04.955 Memory Page Size Minimum: 4096 bytes 00:14:04.955 Memory Page Size Maximum: 4096 bytes 00:14:04.955 Persistent Memory Region: Not Supported 00:14:04.955 Optional Asynchronous Events Supported 00:14:04.955 Namespace Attribute Notices: Supported 00:14:04.955 Firmware Activation Notices: Not Supported 00:14:04.955 ANA Change Notices: Not Supported 00:14:04.955 PLE Aggregate Log Change Notices: Not Supported 00:14:04.955 LBA Status Info Alert Notices: Not Supported 00:14:04.955 EGE Aggregate Log Change Notices: Not Supported 00:14:04.955 Normal NVM Subsystem Shutdown event: Not Supported 00:14:04.955 Zone Descriptor Change Notices: Not Supported 00:14:04.955 Discovery Log Change Notices: Not Supported 00:14:04.955 Controller Attributes 00:14:04.955 128-bit Host Identifier: Supported 00:14:04.955 Non-Operational Permissive Mode: Not Supported 00:14:04.955 NVM Sets: Not Supported 00:14:04.955 Read Recovery Levels: Not Supported 00:14:04.955 Endurance Groups: Not Supported 00:14:04.955 Predictable Latency Mode: Not Supported 00:14:04.955 Traffic Based Keep ALive: Not Supported 00:14:04.955 Namespace Granularity: Not Supported 00:14:04.955 SQ Associations: Not Supported 00:14:04.955 UUID List: Not Supported 00:14:04.955 Multi-Domain Subsystem: Not Supported 00:14:04.955 Fixed Capacity Management: Not Supported 00:14:04.955 Variable Capacity Management: Not Supported 00:14:04.955 Delete Endurance Group: Not Supported 00:14:04.955 Delete NVM Set: Not Supported 00:14:04.955 Extended LBA Formats Supported: Not Supported 00:14:04.955 Flexible Data Placement Supported: Not Supported 00:14:04.955 00:14:04.955 Controller Memory Buffer Support 00:14:04.955 ================================ 00:14:04.955 Supported: No 00:14:04.955 00:14:04.955 Persistent 
Memory Region Support 00:14:04.955 ================================ 00:14:04.955 Supported: No 00:14:04.955 00:14:04.955 Admin Command Set Attributes 00:14:04.955 ============================ 00:14:04.955 Security Send/Receive: Not Supported 00:14:04.955 Format NVM: Not Supported 00:14:04.955 Firmware Activate/Download: Not Supported 00:14:04.955 Namespace Management: Not Supported 00:14:04.955 Device Self-Test: Not Supported 00:14:04.955 Directives: Not Supported 00:14:04.955 NVMe-MI: Not Supported 00:14:04.955 Virtualization Management: Not Supported 00:14:04.955 Doorbell Buffer Config: Not Supported 00:14:04.955 Get LBA Status Capability: Not Supported 00:14:04.955 Command & Feature Lockdown Capability: Not Supported 00:14:04.955 Abort Command Limit: 4 00:14:04.955 Async Event Request Limit: 4 00:14:04.955 Number of Firmware Slots: N/A 00:14:04.955 Firmware Slot 1 Read-Only: N/A 00:14:04.955 Firmware Activation Without Reset: N/A 00:14:04.955 Multiple Update Detection Support: N/A 00:14:04.955 Firmware Update Granularity: No Information Provided 00:14:04.955 Per-Namespace SMART Log: No 00:14:04.955 Asymmetric Namespace Access Log Page: Not Supported 00:14:04.955 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:04.955 Command Effects Log Page: Supported 00:14:04.955 Get Log Page Extended Data: Supported 00:14:04.955 Telemetry Log Pages: Not Supported 00:14:04.955 Persistent Event Log Pages: Not Supported 00:14:04.955 Supported Log Pages Log Page: May Support 00:14:04.955 Commands Supported & Effects Log Page: Not Supported 00:14:04.955 Feature Identifiers & Effects Log Page:May Support 00:14:04.955 NVMe-MI Commands & Effects Log Page: May Support 00:14:04.955 Data Area 4 for Telemetry Log: Not Supported 00:14:04.955 Error Log Page Entries Supported: 128 00:14:04.955 Keep Alive: Supported 00:14:04.955 Keep Alive Granularity: 10000 ms 00:14:04.955 00:14:04.955 NVM Command Set Attributes 00:14:04.955 ========================== 00:14:04.955 Submission Queue Entry Size 00:14:04.955 Max: 64 00:14:04.955 Min: 64 00:14:04.955 Completion Queue Entry Size 00:14:04.955 Max: 16 00:14:04.955 Min: 16 00:14:04.955 Number of Namespaces: 32 00:14:04.955 Compare Command: Supported 00:14:04.955 Write Uncorrectable Command: Not Supported 00:14:04.955 Dataset Management Command: Supported 00:14:04.955 Write Zeroes Command: Supported 00:14:04.955 Set Features Save Field: Not Supported 00:14:04.955 Reservations: Not Supported 00:14:04.955 Timestamp: Not Supported 00:14:04.956 Copy: Supported 00:14:04.956 Volatile Write Cache: Present 00:14:04.956 Atomic Write Unit (Normal): 1 00:14:04.956 Atomic Write Unit (PFail): 1 00:14:04.956 Atomic Compare & Write Unit: 1 00:14:04.956 Fused Compare & Write: Supported 00:14:04.956 Scatter-Gather List 00:14:04.956 SGL Command Set: Supported (Dword aligned) 00:14:04.956 SGL Keyed: Not Supported 00:14:04.956 SGL Bit Bucket Descriptor: Not Supported 00:14:04.956 SGL Metadata Pointer: Not Supported 00:14:04.956 Oversized SGL: Not Supported 00:14:04.956 SGL Metadata Address: Not Supported 00:14:04.956 SGL Offset: Not Supported 00:14:04.956 Transport SGL Data Block: Not Supported 00:14:04.956 Replay Protected Memory Block: Not Supported 00:14:04.956 00:14:04.956 Firmware Slot Information 00:14:04.956 ========================= 00:14:04.956 Active slot: 1 00:14:04.956 Slot 1 Firmware Revision: 24.09 00:14:04.956 00:14:04.956 00:14:04.956 Commands Supported and Effects 00:14:04.956 ============================== 00:14:04.956 Admin Commands 00:14:04.956 -------------- 00:14:04.956 Get 
Log Page (02h): Supported 00:14:04.956 Identify (06h): Supported 00:14:04.956 Abort (08h): Supported 00:14:04.956 Set Features (09h): Supported 00:14:04.956 Get Features (0Ah): Supported 00:14:04.956 Asynchronous Event Request (0Ch): Supported 00:14:04.956 Keep Alive (18h): Supported 00:14:04.956 I/O Commands 00:14:04.956 ------------ 00:14:04.956 Flush (00h): Supported LBA-Change 00:14:04.956 Write (01h): Supported LBA-Change 00:14:04.956 Read (02h): Supported 00:14:04.956 Compare (05h): Supported 00:14:04.956 Write Zeroes (08h): Supported LBA-Change 00:14:04.956 Dataset Management (09h): Supported LBA-Change 00:14:04.956 Copy (19h): Supported LBA-Change 00:14:04.956 00:14:04.956 Error Log 00:14:04.956 ========= 00:14:04.956 00:14:04.956 Arbitration 00:14:04.956 =========== 00:14:04.956 Arbitration Burst: 1 00:14:04.956 00:14:04.956 Power Management 00:14:04.956 ================ 00:14:04.956 Number of Power States: 1 00:14:04.956 Current Power State: Power State #0 00:14:04.956 Power State #0: 00:14:04.956 Max Power: 0.00 W 00:14:04.956 Non-Operational State: Operational 00:14:04.956 Entry Latency: Not Reported 00:14:04.956 Exit Latency: Not Reported 00:14:04.956 Relative Read Throughput: 0 00:14:04.956 Relative Read Latency: 0 00:14:04.956 Relative Write Throughput: 0 00:14:04.956 Relative Write Latency: 0 00:14:04.956 Idle Power: Not Reported 00:14:04.956 Active Power: Not Reported 00:14:04.956 Non-Operational Permissive Mode: Not Supported 00:14:04.956 00:14:04.956 Health Information 00:14:04.956 ================== 00:14:04.956 Critical Warnings: 00:14:04.956 Available Spare Space: OK 00:14:04.956 Temperature: OK 00:14:04.956 Device Reliability: OK 00:14:04.956 Read Only: No 00:14:04.956 Volatile Memory Backup: OK 00:14:04.956 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:04.956 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:04.956 Available Spare: 0% 00:14:04.956 Available Sp[2024-07-25 09:28:37.595436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:04.956 [2024-07-25 09:28:37.595453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:04.956 [2024-07-25 09:28:37.595498] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:04.956 [2024-07-25 09:28:37.595516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.956 [2024-07-25 09:28:37.595527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.956 [2024-07-25 09:28:37.595538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.956 [2024-07-25 09:28:37.595547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.956 [2024-07-25 09:28:37.599367] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:04.956 [2024-07-25 09:28:37.599389] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:04.956 [2024-07-25 09:28:37.600028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.956 [2024-07-25 09:28:37.600116] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:04.956 [2024-07-25 09:28:37.600129] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:04.956 [2024-07-25 09:28:37.601044] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:04.956 [2024-07-25 09:28:37.601067] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:04.956 [2024-07-25 09:28:37.601120] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:04.956 [2024-07-25 09:28:37.603085] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.956 are Threshold: 0% 00:14:04.956 Life Percentage Used: 0% 00:14:04.956 Data Units Read: 0 00:14:04.956 Data Units Written: 0 00:14:04.956 Host Read Commands: 0 00:14:04.956 Host Write Commands: 0 00:14:04.956 Controller Busy Time: 0 minutes 00:14:04.956 Power Cycles: 0 00:14:04.956 Power On Hours: 0 hours 00:14:04.956 Unsafe Shutdowns: 0 00:14:04.956 Unrecoverable Media Errors: 0 00:14:04.956 Lifetime Error Log Entries: 0 00:14:04.956 Warning Temperature Time: 0 minutes 00:14:04.956 Critical Temperature Time: 0 minutes 00:14:04.956 00:14:04.956 Number of Queues 00:14:04.956 ================ 00:14:04.956 Number of I/O Submission Queues: 127 00:14:04.956 Number of I/O Completion Queues: 127 00:14:04.956 00:14:04.956 Active Namespaces 00:14:04.956 ================= 00:14:04.956 Namespace ID:1 00:14:04.956 Error Recovery Timeout: Unlimited 00:14:04.956 Command Set Identifier: NVM (00h) 00:14:04.956 Deallocate: Supported 00:14:04.956 Deallocated/Unwritten Error: Not Supported 00:14:04.956 Deallocated Read Value: Unknown 00:14:04.956 Deallocate in Write Zeroes: Not Supported 00:14:04.956 Deallocated Guard Field: 0xFFFF 00:14:04.956 Flush: Supported 00:14:04.956 Reservation: Supported 00:14:04.956 Namespace Sharing Capabilities: Multiple Controllers 00:14:04.956 Size (in LBAs): 131072 (0GiB) 00:14:04.956 Capacity (in LBAs): 131072 (0GiB) 00:14:04.956 Utilization (in LBAs): 131072 (0GiB) 00:14:04.956 NGUID: 44F3A138EC394266BEBCDA035D441F8D 00:14:04.956 UUID: 44f3a138-ec39-4266-bebc-da035d441f8d 00:14:04.956 Thin Provisioning: Not Supported 00:14:04.956 Per-NS Atomic Units: Yes 00:14:04.956 Atomic Boundary Size (Normal): 0 00:14:04.956 Atomic Boundary Size (PFail): 0 00:14:04.956 Atomic Boundary Offset: 0 00:14:04.956 Maximum Single Source Range Length: 65535 00:14:04.956 Maximum Copy Length: 65535 00:14:04.956 Maximum Source Range Count: 1 00:14:04.956 NGUID/EUI64 Never Reused: No 00:14:04.956 Namespace Write Protected: No 00:14:04.956 Number of LBA Formats: 1 00:14:04.956 Current LBA Format: LBA Format #00 00:14:04.956 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:04.956 00:14:04.956 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:04.956 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:05.214 [2024-07-25 09:28:37.844294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.475 Initializing NVMe Controllers 00:14:10.475 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:10.475 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:10.475 Initialization complete. Launching workers. 00:14:10.475 ======================================================== 00:14:10.475 Latency(us) 00:14:10.475 Device Information : IOPS MiB/s Average min max 00:14:10.475 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33287.37 130.03 3846.68 1176.74 7623.35 00:14:10.475 ======================================================== 00:14:10.475 Total : 33287.37 130.03 3846.68 1176.74 7623.35 00:14:10.475 00:14:10.475 [2024-07-25 09:28:42.866156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.475 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:10.475 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.475 [2024-07-25 09:28:43.112411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:15.736 Initializing NVMe Controllers 00:14:15.736 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:15.736 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:15.736 Initialization complete. Launching workers. 
00:14:15.737 ======================================================== 00:14:15.737 Latency(us) 00:14:15.737 Device Information : IOPS MiB/s Average min max 00:14:15.737 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.18 4960.74 15978.85 00:14:15.737 ======================================================== 00:14:15.737 Total : 16025.60 62.60 7997.18 4960.74 15978.85 00:14:15.737 00:14:15.737 [2024-07-25 09:28:48.149639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:15.737 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:15.737 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.737 [2024-07-25 09:28:48.351675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.039 [2024-07-25 09:28:53.408696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.039 Initializing NVMe Controllers 00:14:21.039 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.039 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:21.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:21.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:21.039 Initialization complete. Launching workers. 00:14:21.039 Starting thread on core 2 00:14:21.039 Starting thread on core 3 00:14:21.039 Starting thread on core 1 00:14:21.039 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:21.039 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.039 [2024-07-25 09:28:53.718853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.397 [2024-07-25 09:28:56.841964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.397 Initializing NVMe Controllers 00:14:24.397 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.397 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.397 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:24.397 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:24.397 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:24.397 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:24.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:24.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:24.397 Initialization complete. Launching workers. 
00:14:24.397 Starting thread on core 1 with urgent priority queue 00:14:24.397 Starting thread on core 2 with urgent priority queue 00:14:24.397 Starting thread on core 3 with urgent priority queue 00:14:24.397 Starting thread on core 0 with urgent priority queue 00:14:24.397 SPDK bdev Controller (SPDK1 ) core 0: 2417.00 IO/s 41.37 secs/100000 ios 00:14:24.397 SPDK bdev Controller (SPDK1 ) core 1: 2492.00 IO/s 40.13 secs/100000 ios 00:14:24.397 SPDK bdev Controller (SPDK1 ) core 2: 2364.00 IO/s 42.30 secs/100000 ios 00:14:24.397 SPDK bdev Controller (SPDK1 ) core 3: 2508.00 IO/s 39.87 secs/100000 ios 00:14:24.397 ======================================================== 00:14:24.397 00:14:24.397 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:24.397 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.654 [2024-07-25 09:28:57.142943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.654 Initializing NVMe Controllers 00:14:24.654 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.654 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.654 Namespace ID: 1 size: 0GB 00:14:24.654 Initialization complete. 00:14:24.654 INFO: using host memory buffer for IO 00:14:24.654 Hello world! 00:14:24.654 [2024-07-25 09:28:57.176509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.654 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:24.654 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.910 [2024-07-25 09:28:57.468846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.842 Initializing NVMe Controllers 00:14:25.842 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:25.842 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:25.842 Initialization complete. Launching workers. 
00:14:25.842 submit (in ns) avg, min, max = 9173.4, 3514.4, 4022913.3 00:14:25.842 complete (in ns) avg, min, max = 26768.4, 2066.7, 4015866.7 00:14:25.842 00:14:25.842 Submit histogram 00:14:25.842 ================ 00:14:25.842 Range in us Cumulative Count 00:14:25.842 3.508 - 3.532: 0.0231% ( 3) 00:14:25.842 3.532 - 3.556: 0.2152% ( 25) 00:14:25.842 3.556 - 3.579: 0.7607% ( 71) 00:14:25.842 3.579 - 3.603: 3.4271% ( 347) 00:14:25.842 3.603 - 3.627: 7.6840% ( 554) 00:14:25.842 3.627 - 3.650: 15.8368% ( 1061) 00:14:25.842 3.650 - 3.674: 24.4506% ( 1121) 00:14:25.842 3.674 - 3.698: 34.9854% ( 1371) 00:14:25.842 3.698 - 3.721: 43.9450% ( 1166) 00:14:25.842 3.721 - 3.745: 51.0373% ( 923) 00:14:25.842 3.745 - 3.769: 56.6313% ( 728) 00:14:25.842 3.769 - 3.793: 61.5030% ( 634) 00:14:25.842 3.793 - 3.816: 65.5141% ( 522) 00:14:25.842 3.816 - 3.840: 69.2331% ( 484) 00:14:25.842 3.840 - 3.864: 72.7524% ( 458) 00:14:25.842 3.864 - 3.887: 76.2563% ( 456) 00:14:25.842 3.887 - 3.911: 79.7449% ( 454) 00:14:25.842 3.911 - 3.935: 83.2488% ( 456) 00:14:25.842 3.935 - 3.959: 85.9920% ( 357) 00:14:25.842 3.959 - 3.982: 88.1359% ( 279) 00:14:25.842 3.982 - 4.006: 89.9109% ( 231) 00:14:25.842 4.006 - 4.030: 91.3631% ( 189) 00:14:25.842 4.030 - 4.053: 92.5004% ( 148) 00:14:25.842 4.053 - 4.077: 93.3379% ( 109) 00:14:25.842 4.077 - 4.101: 94.0295% ( 90) 00:14:25.842 4.101 - 4.124: 94.7288% ( 91) 00:14:25.842 4.124 - 4.148: 95.3435% ( 80) 00:14:25.842 4.148 - 4.172: 95.8353% ( 64) 00:14:25.842 4.172 - 4.196: 96.1042% ( 35) 00:14:25.842 4.196 - 4.219: 96.3962% ( 38) 00:14:25.842 4.219 - 4.243: 96.5883% ( 25) 00:14:25.842 4.243 - 4.267: 96.7189% ( 17) 00:14:25.842 4.267 - 4.290: 96.8034% ( 11) 00:14:25.842 4.290 - 4.314: 96.9033% ( 13) 00:14:25.842 4.314 - 4.338: 96.9494% ( 6) 00:14:25.842 4.338 - 4.361: 97.0801% ( 17) 00:14:25.842 4.361 - 4.385: 97.1646% ( 11) 00:14:25.842 4.385 - 4.409: 97.2184% ( 7) 00:14:25.842 4.409 - 4.433: 97.3106% ( 12) 00:14:25.842 4.433 - 4.456: 97.3644% ( 7) 00:14:25.842 4.456 - 4.480: 97.4105% ( 6) 00:14:25.842 4.480 - 4.504: 97.4489% ( 5) 00:14:25.842 4.504 - 4.527: 97.4720% ( 3) 00:14:25.842 4.527 - 4.551: 97.4796% ( 1) 00:14:25.842 4.551 - 4.575: 97.4950% ( 2) 00:14:25.842 4.575 - 4.599: 97.5027% ( 1) 00:14:25.842 4.599 - 4.622: 97.5104% ( 1) 00:14:25.842 4.646 - 4.670: 97.5411% ( 4) 00:14:25.842 4.670 - 4.693: 97.5565% ( 2) 00:14:25.842 4.693 - 4.717: 97.5718% ( 2) 00:14:25.842 4.717 - 4.741: 97.5949% ( 3) 00:14:25.842 4.741 - 4.764: 97.6179% ( 3) 00:14:25.842 4.764 - 4.788: 97.6564% ( 5) 00:14:25.842 4.788 - 4.812: 97.6717% ( 2) 00:14:25.842 4.812 - 4.836: 97.7178% ( 6) 00:14:25.842 4.836 - 4.859: 97.7332% ( 2) 00:14:25.842 4.859 - 4.883: 97.7716% ( 5) 00:14:25.842 4.883 - 4.907: 97.8485% ( 10) 00:14:25.842 4.907 - 4.930: 97.9099% ( 8) 00:14:25.842 4.930 - 4.954: 97.9560% ( 6) 00:14:25.842 4.954 - 4.978: 97.9945% ( 5) 00:14:25.842 4.978 - 5.001: 98.0175% ( 3) 00:14:25.842 5.001 - 5.025: 98.0559% ( 5) 00:14:25.842 5.025 - 5.049: 98.0713% ( 2) 00:14:25.842 5.049 - 5.073: 98.1405% ( 9) 00:14:25.842 5.096 - 5.120: 98.1789% ( 5) 00:14:25.842 5.120 - 5.144: 98.2019% ( 3) 00:14:25.842 5.144 - 5.167: 98.2096% ( 1) 00:14:25.842 5.167 - 5.191: 98.2480% ( 5) 00:14:25.842 5.191 - 5.215: 98.2557% ( 1) 00:14:25.842 5.215 - 5.239: 98.2634% ( 1) 00:14:25.842 5.239 - 5.262: 98.2788% ( 2) 00:14:25.842 5.262 - 5.286: 98.2865% ( 1) 00:14:25.842 5.286 - 5.310: 98.3095% ( 3) 00:14:25.842 5.310 - 5.333: 98.3172% ( 1) 00:14:25.842 5.333 - 5.357: 98.3249% ( 1) 00:14:25.842 5.357 - 5.381: 98.3402% ( 2) 
00:14:25.842 5.404 - 5.428: 98.3479% ( 1) 00:14:25.842 5.452 - 5.476: 98.3556% ( 1) 00:14:25.842 5.476 - 5.499: 98.3633% ( 1) 00:14:25.842 5.499 - 5.523: 98.3787% ( 2) 00:14:25.842 5.547 - 5.570: 98.3940% ( 2) 00:14:25.842 5.594 - 5.618: 98.4094% ( 2) 00:14:25.842 5.641 - 5.665: 98.4171% ( 1) 00:14:25.842 5.713 - 5.736: 98.4248% ( 1) 00:14:25.842 5.879 - 5.902: 98.4325% ( 1) 00:14:25.842 6.044 - 6.068: 98.4401% ( 1) 00:14:25.842 6.068 - 6.116: 98.4478% ( 1) 00:14:25.842 6.305 - 6.353: 98.4555% ( 1) 00:14:25.843 6.400 - 6.447: 98.4632% ( 1) 00:14:25.843 6.542 - 6.590: 98.4709% ( 1) 00:14:25.843 6.590 - 6.637: 98.4786% ( 1) 00:14:25.843 6.684 - 6.732: 98.4862% ( 1) 00:14:25.843 6.779 - 6.827: 98.4939% ( 1) 00:14:25.843 7.016 - 7.064: 98.5016% ( 1) 00:14:25.843 7.111 - 7.159: 98.5093% ( 1) 00:14:25.843 7.206 - 7.253: 98.5170% ( 1) 00:14:25.843 7.301 - 7.348: 98.5247% ( 1) 00:14:25.843 7.396 - 7.443: 98.5400% ( 2) 00:14:25.843 7.443 - 7.490: 98.5477% ( 1) 00:14:25.843 7.490 - 7.538: 98.5554% ( 1) 00:14:25.843 7.538 - 7.585: 98.5708% ( 2) 00:14:25.843 7.585 - 7.633: 98.5785% ( 1) 00:14:25.843 7.680 - 7.727: 98.5938% ( 2) 00:14:25.843 7.727 - 7.775: 98.6015% ( 1) 00:14:25.843 7.775 - 7.822: 98.6092% ( 1) 00:14:25.843 7.822 - 7.870: 98.6246% ( 2) 00:14:25.843 7.870 - 7.917: 98.6322% ( 1) 00:14:25.843 7.917 - 7.964: 98.6399% ( 1) 00:14:25.843 8.059 - 8.107: 98.6476% ( 1) 00:14:25.843 8.107 - 8.154: 98.6707% ( 3) 00:14:25.843 8.201 - 8.249: 98.6783% ( 1) 00:14:25.843 8.296 - 8.344: 98.6860% ( 1) 00:14:25.843 8.344 - 8.391: 98.6937% ( 1) 00:14:25.843 8.391 - 8.439: 98.7014% ( 1) 00:14:25.843 8.486 - 8.533: 98.7091% ( 1) 00:14:25.843 8.628 - 8.676: 98.7168% ( 1) 00:14:25.843 8.770 - 8.818: 98.7245% ( 1) 00:14:25.843 8.818 - 8.865: 98.7321% ( 1) 00:14:25.843 8.865 - 8.913: 98.7398% ( 1) 00:14:25.843 8.960 - 9.007: 98.7475% ( 1) 00:14:25.843 9.244 - 9.292: 98.7629% ( 2) 00:14:25.843 9.292 - 9.339: 98.7706% ( 1) 00:14:25.843 9.339 - 9.387: 98.7782% ( 1) 00:14:25.843 9.481 - 9.529: 98.7859% ( 1) 00:14:25.843 9.529 - 9.576: 98.7936% ( 1) 00:14:25.843 9.624 - 9.671: 98.8013% ( 1) 00:14:25.843 9.908 - 9.956: 98.8090% ( 1) 00:14:25.843 10.098 - 10.145: 98.8167% ( 1) 00:14:25.843 10.287 - 10.335: 98.8243% ( 1) 00:14:25.843 10.335 - 10.382: 98.8320% ( 1) 00:14:25.843 10.430 - 10.477: 98.8474% ( 2) 00:14:25.843 10.524 - 10.572: 98.8551% ( 1) 00:14:25.843 10.856 - 10.904: 98.8628% ( 1) 00:14:25.843 10.904 - 10.951: 98.8704% ( 1) 00:14:25.843 11.236 - 11.283: 98.8781% ( 1) 00:14:25.843 11.378 - 11.425: 98.8858% ( 1) 00:14:25.843 11.520 - 11.567: 98.8935% ( 1) 00:14:25.843 11.567 - 11.615: 98.9012% ( 1) 00:14:25.843 11.710 - 11.757: 98.9089% ( 1) 00:14:25.843 11.757 - 11.804: 98.9242% ( 2) 00:14:25.843 11.994 - 12.041: 98.9319% ( 1) 00:14:25.843 12.136 - 12.231: 98.9396% ( 1) 00:14:25.843 12.231 - 12.326: 98.9550% ( 2) 00:14:25.843 12.421 - 12.516: 98.9627% ( 1) 00:14:25.843 12.516 - 12.610: 98.9703% ( 1) 00:14:25.843 12.800 - 12.895: 98.9780% ( 1) 00:14:25.843 12.990 - 13.084: 98.9857% ( 1) 00:14:25.843 13.274 - 13.369: 99.0011% ( 2) 00:14:25.843 13.369 - 13.464: 99.0088% ( 1) 00:14:25.843 13.843 - 13.938: 99.0164% ( 1) 00:14:25.843 13.938 - 14.033: 99.0241% ( 1) 00:14:25.843 14.127 - 14.222: 99.0318% ( 1) 00:14:25.843 14.222 - 14.317: 99.0472% ( 2) 00:14:25.843 14.412 - 14.507: 99.0549% ( 1) 00:14:25.843 14.601 - 14.696: 99.0702% ( 2) 00:14:25.843 14.696 - 14.791: 99.0856% ( 2) 00:14:25.843 15.265 - 15.360: 99.0933% ( 1) 00:14:25.843 15.834 - 15.929: 99.1010% ( 1) 00:14:25.843 17.067 - 17.161: 99.1087% ( 1) 
00:14:25.843 17.161 - 17.256: 99.1163% ( 1) 00:14:25.843 17.256 - 17.351: 99.1240% ( 1) 00:14:25.843 17.541 - 17.636: 99.1624% ( 5) 00:14:25.843 17.636 - 17.730: 99.2009% ( 5) 00:14:25.843 17.730 - 17.825: 99.2470% ( 6) 00:14:25.843 17.825 - 17.920: 99.2931% ( 6) 00:14:25.843 17.920 - 18.015: 99.3469% ( 7) 00:14:25.843 18.015 - 18.110: 99.3853% ( 5) 00:14:25.843 18.110 - 18.204: 99.4314% ( 6) 00:14:25.843 18.204 - 18.299: 99.4852% ( 7) 00:14:25.843 18.299 - 18.394: 99.5620% ( 10) 00:14:25.843 18.394 - 18.489: 99.6081% ( 6) 00:14:25.843 18.584 - 18.679: 99.6235% ( 2) 00:14:25.843 18.679 - 18.773: 99.6773% ( 7) 00:14:25.843 18.773 - 18.868: 99.7157% ( 5) 00:14:25.843 18.868 - 18.963: 99.7234% ( 1) 00:14:25.843 18.963 - 19.058: 99.7464% ( 3) 00:14:25.843 19.058 - 19.153: 99.7772% ( 4) 00:14:25.843 19.153 - 19.247: 99.7925% ( 2) 00:14:25.843 19.247 - 19.342: 99.8002% ( 1) 00:14:25.843 19.437 - 19.532: 99.8079% ( 1) 00:14:25.843 19.627 - 19.721: 99.8156% ( 1) 00:14:25.843 19.721 - 19.816: 99.8233% ( 1) 00:14:25.843 20.196 - 20.290: 99.8310% ( 1) 00:14:25.843 20.764 - 20.859: 99.8386% ( 1) 00:14:25.843 22.092 - 22.187: 99.8463% ( 1) 00:14:25.843 24.273 - 24.462: 99.8540% ( 1) 00:14:25.843 27.117 - 27.307: 99.8617% ( 1) 00:14:25.843 27.307 - 27.496: 99.8694% ( 1) 00:14:25.843 3980.705 - 4004.978: 99.9693% ( 13) 00:14:25.843 4004.978 - 4029.250: 100.0000% ( 4) 00:14:25.843 00:14:25.843 Complete histogram 00:14:25.843 ================== 00:14:25.843 Range in us Cumulative Count 00:14:25.843 2.062 - 2.074: 0.0999% ( 13) 00:14:25.843 2.074 - 2.086: 12.0716% ( 1558) 00:14:25.843 2.086 - 2.098: 35.4234% ( 3039) 00:14:25.843 2.098 - 2.110: 41.2325% ( 756) 00:14:25.843 2.110 - 2.121: 50.3611% ( 1188) 00:14:25.843 2.121 - 2.133: 56.3777% ( 783) 00:14:25.843 2.133 - 2.145: 59.0902% ( 353) 00:14:25.843 2.145 - 2.157: 68.2112% ( 1187) 00:14:25.844 2.157 - 2.169: 75.9413% ( 1006) 00:14:25.844 2.169 - 2.181: 78.6000% ( 346) 00:14:25.844 2.181 - 2.193: 82.9338% ( 564) 00:14:25.844 2.193 - 2.204: 85.6616% ( 355) 00:14:25.844 2.204 - 2.216: 86.6759% ( 132) 00:14:25.844 2.216 - 2.228: 88.5585% ( 245) 00:14:25.844 2.228 - 2.240: 90.5871% ( 264) 00:14:25.844 2.240 - 2.252: 92.6387% ( 267) 00:14:25.844 2.252 - 2.264: 93.8681% ( 160) 00:14:25.844 2.264 - 2.276: 94.4521% ( 76) 00:14:25.844 2.276 - 2.287: 94.6596% ( 27) 00:14:25.844 2.287 - 2.299: 94.8594% ( 26) 00:14:25.844 2.299 - 2.311: 95.1130% ( 33) 00:14:25.844 2.311 - 2.323: 95.5894% ( 62) 00:14:25.844 2.323 - 2.335: 95.7892% ( 26) 00:14:25.844 2.335 - 2.347: 95.8045% ( 2) 00:14:25.844 2.347 - 2.359: 95.8506% ( 6) 00:14:25.844 2.359 - 2.370: 95.9044% ( 7) 00:14:25.844 2.370 - 2.382: 95.9889% ( 11) 00:14:25.844 2.382 - 2.394: 96.1887% ( 26) 00:14:25.844 2.394 - 2.406: 96.3731% ( 24) 00:14:25.844 2.406 - 2.418: 96.5422% ( 22) 00:14:25.844 2.418 - 2.430: 96.8034% ( 34) 00:14:25.844 2.430 - 2.441: 97.0416% ( 31) 00:14:25.844 2.441 - 2.453: 97.3106% ( 35) 00:14:25.844 2.453 - 2.465: 97.5257% ( 28) 00:14:25.844 2.465 - 2.477: 97.7793% ( 33) 00:14:25.844 2.477 - 2.489: 97.9484% ( 22) 00:14:25.844 2.489 - 2.501: 98.0329% ( 11) 00:14:25.844 2.501 - 2.513: 98.1251% ( 12) 00:14:25.844 2.513 - 2.524: 98.2327% ( 14) 00:14:25.844 2.524 - 2.536: 98.3249% ( 12) 00:14:25.844 2.536 - 2.548: 98.3479% ( 3) 00:14:25.844 2.548 - 2.560: 98.3710% ( 3) 00:14:25.844 2.560 - 2.572: 98.3864% ( 2) 00:14:25.844 2.572 - 2.584: 98.3940% ( 1) 00:14:25.844 2.584 - 2.596: 98.4171% ( 3) 00:14:25.844 2.596 - 2.607: 98.4325% ( 2) 00:14:25.844 2.607 - 2.619: 98.4555% ( 3) 00:14:25.844 2.619 - 2.631: 
98.4632% ( 1) 00:14:25.844 2.631 - 2.643: 98.4786% ( 2) 00:14:25.844 2.643 - 2.655: 98.4862% ( 1) 00:14:25.844 2.667 - 2.679: 98.4939% ( 1) 00:14:25.844 2.690 - 2.702: 98.5093% ( 2) 00:14:25.844 2.714 - 2.726: 98.5247% ( 2) 00:14:25.844 2.738 - 2.750: 98.5323% ( 1) 00:14:25.844 2.761 - 2.773: 98.5400% ( 1) 00:14:25.844 2.773 - 2.785: 98.5477% ( 1) 00:14:25.844 2.785 - 2.797: 98.5631% ( 2) 00:14:25.844 2.821 - 2.833: 98.5708% ( 1) 00:14:25.844 2.916 - 2.927: 98.5785% ( 1) 00:14:25.844 3.342 - 3.366: 98.5938% ( 2) 00:14:25.844 3.366 - 3.390: 98.6015% ( 1) [2024-07-25 09:28:58.490934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.844 3.484 - 3.508: 98.6169% ( 2) 00:14:25.844 3.532 - 3.556: 98.6246% ( 1) 00:14:25.844 3.579 - 3.603: 98.6322% ( 1) 00:14:25.844 3.603 - 3.627: 98.6553% ( 3) 00:14:25.844 3.627 - 3.650: 98.6630% ( 1) 00:14:25.844 3.650 - 3.674: 98.6783% ( 2) 00:14:25.844 3.698 - 3.721: 98.7014% ( 3) 00:14:25.844 3.745 - 3.769: 98.7168% ( 2) 00:14:25.844 3.816 - 3.840: 98.7245% ( 1) 00:14:25.844 3.935 - 3.959: 98.7321% ( 1) 00:14:25.844 3.959 - 3.982: 98.7398% ( 1) 00:14:25.844 3.982 - 4.006: 98.7475% ( 1) 00:14:25.844 4.053 - 4.077: 98.7552% ( 1) 00:14:25.844 4.101 - 4.124: 98.7706% ( 2) 00:14:25.844 5.215 - 5.239: 98.7782% ( 1) 00:14:25.844 5.239 - 5.262: 98.7859% ( 1) 00:14:25.844 5.381 - 5.404: 98.7936% ( 1) 00:14:25.844 5.547 - 5.570: 98.8013% ( 1) 00:14:25.844 6.021 - 6.044: 98.8090% ( 1) 00:14:25.844 6.210 - 6.258: 98.8167% ( 1) 00:14:25.844 6.258 - 6.305: 98.8243% ( 1) 00:14:25.844 6.353 - 6.400: 98.8320% ( 1) 00:14:25.844 6.590 - 6.637: 98.8397% ( 1) 00:14:25.844 6.684 - 6.732: 98.8474% ( 1) 00:14:25.844 7.490 - 7.538: 98.8551% ( 1) 00:14:25.844 7.727 - 7.775: 98.8628% ( 1) 00:14:25.844 8.059 - 8.107: 98.8704% ( 1) 00:14:25.844 8.107 - 8.154: 98.8781% ( 1) 00:14:25.844 8.913 - 8.960: 98.8858% ( 1) 00:14:25.844 9.197 - 9.244: 98.8935% ( 1) 00:14:25.844 12.705 - 12.800: 98.9012% ( 1) 00:14:25.844 15.550 - 15.644: 98.9166% ( 2) 00:14:25.844 15.644 - 15.739: 98.9242% ( 1) 00:14:25.844 15.739 - 15.834: 98.9473% ( 3) 00:14:25.844 15.929 - 16.024: 98.9627% ( 2) 00:14:25.844 16.024 - 16.119: 98.9703% ( 1) 00:14:25.844 16.119 - 16.213: 98.9934% ( 3) 00:14:25.844 16.213 - 16.308: 99.0088% ( 2) 00:14:25.844 16.308 - 16.403: 99.0779% ( 9) 00:14:25.844 16.403 - 16.498: 99.1240% ( 6) 00:14:25.844 16.498 - 16.593: 99.1394% ( 2) 00:14:25.844 16.593 - 16.687: 99.2009% ( 8) 00:14:25.844 16.687 - 16.782: 99.2239% ( 3) 00:14:25.844 16.782 - 16.877: 99.2623% ( 5) 00:14:25.844 16.877 - 16.972: 99.2854% ( 3) 00:14:25.844 16.972 - 17.067: 99.3008% ( 2) 00:14:25.844 17.161 - 17.256: 99.3084% ( 1) 00:14:25.844 17.256 - 17.351: 99.3161% ( 1) 00:14:25.844 17.351 - 17.446: 99.3392% ( 3) 00:14:25.844 17.636 - 17.730: 99.3469% ( 1) 00:14:25.844 17.825 - 17.920: 99.3622% ( 2) 00:14:25.844 18.015 - 18.110: 99.3699% ( 1) 00:14:25.844 18.110 - 18.204: 99.3776% ( 1) 00:14:25.844 18.679 - 18.773: 99.3853% ( 1) 00:14:25.844 3252.527 - 3276.800: 99.3930% ( 1) 00:14:25.844 3980.705 - 4004.978: 99.8002% ( 53) 00:14:25.844 4004.978 - 4029.250: 100.0000% ( 26) 00:14:25.844 00:14:25.844 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:25.845 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:25.845 09:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:25.845 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:25.845 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.102 [ 00:14:26.102 { 00:14:26.102 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.102 "subtype": "Discovery", 00:14:26.102 "listen_addresses": [], 00:14:26.102 "allow_any_host": true, 00:14:26.102 "hosts": [] 00:14:26.102 }, 00:14:26.102 { 00:14:26.102 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.102 "subtype": "NVMe", 00:14:26.102 "listen_addresses": [ 00:14:26.103 { 00:14:26.103 "trtype": "VFIOUSER", 00:14:26.103 "adrfam": "IPv4", 00:14:26.103 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.103 "trsvcid": "0" 00:14:26.103 } 00:14:26.103 ], 00:14:26.103 "allow_any_host": true, 00:14:26.103 "hosts": [], 00:14:26.103 "serial_number": "SPDK1", 00:14:26.103 "model_number": "SPDK bdev Controller", 00:14:26.103 "max_namespaces": 32, 00:14:26.103 "min_cntlid": 1, 00:14:26.103 "max_cntlid": 65519, 00:14:26.103 "namespaces": [ 00:14:26.103 { 00:14:26.103 "nsid": 1, 00:14:26.103 "bdev_name": "Malloc1", 00:14:26.103 "name": "Malloc1", 00:14:26.103 "nguid": "44F3A138EC394266BEBCDA035D441F8D", 00:14:26.103 "uuid": "44f3a138-ec39-4266-bebc-da035d441f8d" 00:14:26.103 } 00:14:26.103 ] 00:14:26.103 }, 00:14:26.103 { 00:14:26.103 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.103 "subtype": "NVMe", 00:14:26.103 "listen_addresses": [ 00:14:26.103 { 00:14:26.103 "trtype": "VFIOUSER", 00:14:26.103 "adrfam": "IPv4", 00:14:26.103 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.103 "trsvcid": "0" 00:14:26.103 } 00:14:26.103 ], 00:14:26.103 "allow_any_host": true, 00:14:26.103 "hosts": [], 00:14:26.103 "serial_number": "SPDK2", 00:14:26.103 "model_number": "SPDK bdev Controller", 00:14:26.103 "max_namespaces": 32, 00:14:26.103 "min_cntlid": 1, 00:14:26.103 "max_cntlid": 65519, 00:14:26.103 "namespaces": [ 00:14:26.103 { 00:14:26.103 "nsid": 1, 00:14:26.103 "bdev_name": "Malloc2", 00:14:26.103 "name": "Malloc2", 00:14:26.103 "nguid": "B8038D2F63744A218E81D06BF0E51759", 00:14:26.103 "uuid": "b8038d2f-6374-4a21-8e81-d06bf0e51759" 00:14:26.103 } 00:14:26.103 ] 00:14:26.103 } 00:14:26.103 ] 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=502789 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # i=1 00:14:26.103 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # sleep 0.1 00:14:26.103 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.360 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # i=2 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # sleep 0.1 00:14:26.361 [2024-07-25 09:28:58.924228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:26.361 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:26.618 Malloc3 00:14:26.618 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:26.875 [2024-07-25 09:28:59.485291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.875 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.875 Asynchronous Event Request test 00:14:26.875 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.875 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.875 Registering asynchronous event callbacks... 00:14:26.875 Starting namespace attribute notice tests for all controllers... 00:14:26.875 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:26.875 aer_cb - Changed Namespace 00:14:26.875 Cleaning up... 
00:14:27.134 [ 00:14:27.134 { 00:14:27.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.134 "subtype": "Discovery", 00:14:27.134 "listen_addresses": [], 00:14:27.134 "allow_any_host": true, 00:14:27.134 "hosts": [] 00:14:27.134 }, 00:14:27.134 { 00:14:27.134 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.134 "subtype": "NVMe", 00:14:27.134 "listen_addresses": [ 00:14:27.134 { 00:14:27.134 "trtype": "VFIOUSER", 00:14:27.134 "adrfam": "IPv4", 00:14:27.134 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.134 "trsvcid": "0" 00:14:27.134 } 00:14:27.134 ], 00:14:27.134 "allow_any_host": true, 00:14:27.134 "hosts": [], 00:14:27.134 "serial_number": "SPDK1", 00:14:27.135 "model_number": "SPDK bdev Controller", 00:14:27.135 "max_namespaces": 32, 00:14:27.135 "min_cntlid": 1, 00:14:27.135 "max_cntlid": 65519, 00:14:27.135 "namespaces": [ 00:14:27.135 { 00:14:27.135 "nsid": 1, 00:14:27.135 "bdev_name": "Malloc1", 00:14:27.135 "name": "Malloc1", 00:14:27.135 "nguid": "44F3A138EC394266BEBCDA035D441F8D", 00:14:27.135 "uuid": "44f3a138-ec39-4266-bebc-da035d441f8d" 00:14:27.135 }, 00:14:27.135 { 00:14:27.135 "nsid": 2, 00:14:27.135 "bdev_name": "Malloc3", 00:14:27.135 "name": "Malloc3", 00:14:27.135 "nguid": "E7D2D2E46542451596D8462529FE9C84", 00:14:27.135 "uuid": "e7d2d2e4-6542-4515-96d8-462529fe9c84" 00:14:27.135 } 00:14:27.135 ] 00:14:27.135 }, 00:14:27.135 { 00:14:27.135 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:27.135 "subtype": "NVMe", 00:14:27.135 "listen_addresses": [ 00:14:27.135 { 00:14:27.135 "trtype": "VFIOUSER", 00:14:27.135 "adrfam": "IPv4", 00:14:27.135 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.135 "trsvcid": "0" 00:14:27.135 } 00:14:27.135 ], 00:14:27.135 "allow_any_host": true, 00:14:27.135 "hosts": [], 00:14:27.135 "serial_number": "SPDK2", 00:14:27.135 "model_number": "SPDK bdev Controller", 00:14:27.135 "max_namespaces": 32, 00:14:27.135 "min_cntlid": 1, 00:14:27.135 "max_cntlid": 65519, 00:14:27.135 "namespaces": [ 00:14:27.135 { 00:14:27.135 "nsid": 1, 00:14:27.135 "bdev_name": "Malloc2", 00:14:27.135 "name": "Malloc2", 00:14:27.135 "nguid": "B8038D2F63744A218E81D06BF0E51759", 00:14:27.135 "uuid": "b8038d2f-6374-4a21-8e81-d06bf0e51759" 00:14:27.135 } 00:14:27.135 ] 00:14:27.135 } 00:14:27.135 ] 00:14:27.135 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 502789 00:14:27.135 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.135 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:27.135 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:27.135 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:27.135 [2024-07-25 09:28:59.763431] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:14:27.135 [2024-07-25 09:28:59.763475] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502924 ] 00:14:27.135 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.135 [2024-07-25 09:28:59.798530] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:27.135 [2024-07-25 09:28:59.804638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.135 [2024-07-25 09:28:59.804683] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbe29226000 00:14:27.135 [2024-07-25 09:28:59.805654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.806662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.807669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.808674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.809674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.810676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.811687] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.812695] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.135 [2024-07-25 09:28:59.813720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.135 [2024-07-25 09:28:59.813742] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbe2921b000 00:14:27.135 [2024-07-25 09:28:59.814879] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.135 [2024-07-25 09:28:59.828944] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:27.135 [2024-07-25 09:28:59.828974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:27.135 [2024-07-25 09:28:59.834081] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:27.135 [2024-07-25 09:28:59.834134] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:27.135 [2024-07-25 09:28:59.834222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:14:27.135 [2024-07-25 09:28:59.834244] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:27.135 [2024-07-25 09:28:59.834254] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:27.135 [2024-07-25 09:28:59.835085] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:27.135 [2024-07-25 09:28:59.835111] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:27.135 [2024-07-25 09:28:59.835126] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:27.135 [2024-07-25 09:28:59.836093] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:27.135 [2024-07-25 09:28:59.836113] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:27.135 [2024-07-25 09:28:59.836126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.136 [2024-07-25 09:28:59.837103] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:27.136 [2024-07-25 09:28:59.837123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.136 [2024-07-25 09:28:59.838107] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:27.136 [2024-07-25 09:28:59.838126] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:27.136 [2024-07-25 09:28:59.838135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:27.136 [2024-07-25 09:28:59.838147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.136 [2024-07-25 09:28:59.838256] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:27.136 [2024-07-25 09:28:59.838264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.136 [2024-07-25 09:28:59.838272] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:27.136 [2024-07-25 09:28:59.839112] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:27.136 [2024-07-25 09:28:59.840117] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:27.136 [2024-07-25 09:28:59.841126] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:27.136 [2024-07-25 09:28:59.842124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.136 [2024-07-25 09:28:59.842203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.136 [2024-07-25 09:28:59.843148] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:27.136 [2024-07-25 09:28:59.843171] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.136 [2024-07-25 09:28:59.843182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.843205] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:27.136 [2024-07-25 09:28:59.843218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.843238] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.136 [2024-07-25 09:28:59.843247] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.136 [2024-07-25 09:28:59.843254] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.136 [2024-07-25 09:28:59.843269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.136 [2024-07-25 09:28:59.851370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:27.136 [2024-07-25 09:28:59.851392] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:27.136 [2024-07-25 09:28:59.851401] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:27.136 [2024-07-25 09:28:59.851409] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:27.136 [2024-07-25 09:28:59.851417] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:27.136 [2024-07-25 09:28:59.851425] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:27.136 [2024-07-25 09:28:59.851434] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:27.136 [2024-07-25 09:28:59.851442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.851455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.851476] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:27.136 [2024-07-25 09:28:59.859367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:27.136 [2024-07-25 09:28:59.859397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.136 [2024-07-25 09:28:59.859411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.136 [2024-07-25 09:28:59.859424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.136 [2024-07-25 09:28:59.859436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.136 [2024-07-25 09:28:59.859445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.859461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.859476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:27.136 [2024-07-25 09:28:59.867369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:27.136 [2024-07-25 09:28:59.867388] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:27.136 [2024-07-25 09:28:59.867397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.867413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.867425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:27.136 [2024-07-25 09:28:59.867439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.395 [2024-07-25 09:28:59.875380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:27.395 [2024-07-25 09:28:59.875454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.875472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.875485] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:27.395 [2024-07-25 09:28:59.875493] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:27.395 [2024-07-25 
09:28:59.875500] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.395 [2024-07-25 09:28:59.875509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:27.395 [2024-07-25 09:28:59.883380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:27.395 [2024-07-25 09:28:59.883408] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:27.395 [2024-07-25 09:28:59.883424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.883439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.883452] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.395 [2024-07-25 09:28:59.883460] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.395 [2024-07-25 09:28:59.883467] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.395 [2024-07-25 09:28:59.883476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.395 [2024-07-25 09:28:59.891382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:27.395 [2024-07-25 09:28:59.891409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.891425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.891439] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.395 [2024-07-25 09:28:59.891451] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.395 [2024-07-25 09:28:59.891458] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.395 [2024-07-25 09:28:59.891467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.395 [2024-07-25 09:28:59.899381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:27.395 [2024-07-25 09:28:59.899402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.899415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.899431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:27.395 [2024-07-25 
09:28:59.899444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.899453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:27.395 [2024-07-25 09:28:59.899461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:27.396 [2024-07-25 09:28:59.899470] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:27.396 [2024-07-25 09:28:59.899477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:27.396 [2024-07-25 09:28:59.899486] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:27.396 [2024-07-25 09:28:59.899509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.907382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.907418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.915368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.915393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.923366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.923390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.928408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.928440] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:27.396 [2024-07-25 09:28:59.928451] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:27.396 [2024-07-25 09:28:59.928457] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:27.396 [2024-07-25 09:28:59.928463] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:27.396 [2024-07-25 09:28:59.928469] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:27.396 [2024-07-25 09:28:59.928479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:27.396 [2024-07-25 09:28:59.928495] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:27.396 [2024-07-25 09:28:59.928505] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:14:27.396 [2024-07-25 09:28:59.928511] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.396 [2024-07-25 09:28:59.928520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.928531] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:27.396 [2024-07-25 09:28:59.928540] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.396 [2024-07-25 09:28:59.928546] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.396 [2024-07-25 09:28:59.928554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.928566] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:27.396 [2024-07-25 09:28:59.928575] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:27.396 [2024-07-25 09:28:59.928581] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.396 [2024-07-25 09:28:59.928589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:27.396 [2024-07-25 09:28:59.939381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.939410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.939428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:27.396 [2024-07-25 09:28:59.939440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:27.396 ===================================================== 00:14:27.396 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:27.396 ===================================================== 00:14:27.396 Controller Capabilities/Features 00:14:27.396 ================================ 00:14:27.396 Vendor ID: 4e58 00:14:27.396 Subsystem Vendor ID: 4e58 00:14:27.396 Serial Number: SPDK2 00:14:27.396 Model Number: SPDK bdev Controller 00:14:27.396 Firmware Version: 24.09 00:14:27.396 Recommended Arb Burst: 6 00:14:27.396 IEEE OUI Identifier: 8d 6b 50 00:14:27.396 Multi-path I/O 00:14:27.396 May have multiple subsystem ports: Yes 00:14:27.396 May have multiple controllers: Yes 00:14:27.396 Associated with SR-IOV VF: No 00:14:27.396 Max Data Transfer Size: 131072 00:14:27.396 Max Number of Namespaces: 32 00:14:27.396 Max Number of I/O Queues: 127 00:14:27.396 NVMe Specification Version (VS): 1.3 00:14:27.396 NVMe Specification Version (Identify): 1.3 00:14:27.396 Maximum Queue Entries: 256 00:14:27.396 Contiguous Queues Required: Yes 00:14:27.396 Arbitration Mechanisms Supported 00:14:27.396 Weighted Round Robin: Not Supported 00:14:27.396 Vendor Specific: Not Supported 00:14:27.396 Reset Timeout: 15000 ms 00:14:27.396 Doorbell Stride: 4 
bytes 00:14:27.396 NVM Subsystem Reset: Not Supported 00:14:27.396 Command Sets Supported 00:14:27.396 NVM Command Set: Supported 00:14:27.396 Boot Partition: Not Supported 00:14:27.396 Memory Page Size Minimum: 4096 bytes 00:14:27.396 Memory Page Size Maximum: 4096 bytes 00:14:27.396 Persistent Memory Region: Not Supported 00:14:27.396 Optional Asynchronous Events Supported 00:14:27.396 Namespace Attribute Notices: Supported 00:14:27.396 Firmware Activation Notices: Not Supported 00:14:27.396 ANA Change Notices: Not Supported 00:14:27.396 PLE Aggregate Log Change Notices: Not Supported 00:14:27.396 LBA Status Info Alert Notices: Not Supported 00:14:27.396 EGE Aggregate Log Change Notices: Not Supported 00:14:27.396 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.396 Zone Descriptor Change Notices: Not Supported 00:14:27.396 Discovery Log Change Notices: Not Supported 00:14:27.396 Controller Attributes 00:14:27.396 128-bit Host Identifier: Supported 00:14:27.396 Non-Operational Permissive Mode: Not Supported 00:14:27.396 NVM Sets: Not Supported 00:14:27.396 Read Recovery Levels: Not Supported 00:14:27.396 Endurance Groups: Not Supported 00:14:27.396 Predictable Latency Mode: Not Supported 00:14:27.396 Traffic Based Keep ALive: Not Supported 00:14:27.396 Namespace Granularity: Not Supported 00:14:27.396 SQ Associations: Not Supported 00:14:27.396 UUID List: Not Supported 00:14:27.396 Multi-Domain Subsystem: Not Supported 00:14:27.396 Fixed Capacity Management: Not Supported 00:14:27.396 Variable Capacity Management: Not Supported 00:14:27.396 Delete Endurance Group: Not Supported 00:14:27.396 Delete NVM Set: Not Supported 00:14:27.396 Extended LBA Formats Supported: Not Supported 00:14:27.396 Flexible Data Placement Supported: Not Supported 00:14:27.396 00:14:27.396 Controller Memory Buffer Support 00:14:27.396 ================================ 00:14:27.396 Supported: No 00:14:27.396 00:14:27.396 Persistent Memory Region Support 00:14:27.396 ================================ 00:14:27.396 Supported: No 00:14:27.396 00:14:27.396 Admin Command Set Attributes 00:14:27.396 ============================ 00:14:27.396 Security Send/Receive: Not Supported 00:14:27.396 Format NVM: Not Supported 00:14:27.396 Firmware Activate/Download: Not Supported 00:14:27.396 Namespace Management: Not Supported 00:14:27.396 Device Self-Test: Not Supported 00:14:27.396 Directives: Not Supported 00:14:27.396 NVMe-MI: Not Supported 00:14:27.396 Virtualization Management: Not Supported 00:14:27.396 Doorbell Buffer Config: Not Supported 00:14:27.396 Get LBA Status Capability: Not Supported 00:14:27.396 Command & Feature Lockdown Capability: Not Supported 00:14:27.396 Abort Command Limit: 4 00:14:27.396 Async Event Request Limit: 4 00:14:27.396 Number of Firmware Slots: N/A 00:14:27.396 Firmware Slot 1 Read-Only: N/A 00:14:27.396 Firmware Activation Without Reset: N/A 00:14:27.396 Multiple Update Detection Support: N/A 00:14:27.396 Firmware Update Granularity: No Information Provided 00:14:27.396 Per-Namespace SMART Log: No 00:14:27.396 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.396 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:27.396 Command Effects Log Page: Supported 00:14:27.396 Get Log Page Extended Data: Supported 00:14:27.396 Telemetry Log Pages: Not Supported 00:14:27.396 Persistent Event Log Pages: Not Supported 00:14:27.396 Supported Log Pages Log Page: May Support 00:14:27.396 Commands Supported & Effects Log Page: Not Supported 00:14:27.396 Feature Identifiers & Effects Log 
Page:May Support 00:14:27.396 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.396 Data Area 4 for Telemetry Log: Not Supported 00:14:27.396 Error Log Page Entries Supported: 128 00:14:27.396 Keep Alive: Supported 00:14:27.396 Keep Alive Granularity: 10000 ms 00:14:27.396 00:14:27.396 NVM Command Set Attributes 00:14:27.396 ========================== 00:14:27.396 Submission Queue Entry Size 00:14:27.396 Max: 64 00:14:27.396 Min: 64 00:14:27.396 Completion Queue Entry Size 00:14:27.396 Max: 16 00:14:27.396 Min: 16 00:14:27.397 Number of Namespaces: 32 00:14:27.397 Compare Command: Supported 00:14:27.397 Write Uncorrectable Command: Not Supported 00:14:27.397 Dataset Management Command: Supported 00:14:27.397 Write Zeroes Command: Supported 00:14:27.397 Set Features Save Field: Not Supported 00:14:27.397 Reservations: Not Supported 00:14:27.397 Timestamp: Not Supported 00:14:27.397 Copy: Supported 00:14:27.397 Volatile Write Cache: Present 00:14:27.397 Atomic Write Unit (Normal): 1 00:14:27.397 Atomic Write Unit (PFail): 1 00:14:27.397 Atomic Compare & Write Unit: 1 00:14:27.397 Fused Compare & Write: Supported 00:14:27.397 Scatter-Gather List 00:14:27.397 SGL Command Set: Supported (Dword aligned) 00:14:27.397 SGL Keyed: Not Supported 00:14:27.397 SGL Bit Bucket Descriptor: Not Supported 00:14:27.397 SGL Metadata Pointer: Not Supported 00:14:27.397 Oversized SGL: Not Supported 00:14:27.397 SGL Metadata Address: Not Supported 00:14:27.397 SGL Offset: Not Supported 00:14:27.397 Transport SGL Data Block: Not Supported 00:14:27.397 Replay Protected Memory Block: Not Supported 00:14:27.397 00:14:27.397 Firmware Slot Information 00:14:27.397 ========================= 00:14:27.397 Active slot: 1 00:14:27.397 Slot 1 Firmware Revision: 24.09 00:14:27.397 00:14:27.397 00:14:27.397 Commands Supported and Effects 00:14:27.397 ============================== 00:14:27.397 Admin Commands 00:14:27.397 -------------- 00:14:27.397 Get Log Page (02h): Supported 00:14:27.397 Identify (06h): Supported 00:14:27.397 Abort (08h): Supported 00:14:27.397 Set Features (09h): Supported 00:14:27.397 Get Features (0Ah): Supported 00:14:27.397 Asynchronous Event Request (0Ch): Supported 00:14:27.397 Keep Alive (18h): Supported 00:14:27.397 I/O Commands 00:14:27.397 ------------ 00:14:27.397 Flush (00h): Supported LBA-Change 00:14:27.397 Write (01h): Supported LBA-Change 00:14:27.397 Read (02h): Supported 00:14:27.397 Compare (05h): Supported 00:14:27.397 Write Zeroes (08h): Supported LBA-Change 00:14:27.397 Dataset Management (09h): Supported LBA-Change 00:14:27.397 Copy (19h): Supported LBA-Change 00:14:27.397 00:14:27.397 Error Log 00:14:27.397 ========= 00:14:27.397 00:14:27.397 Arbitration 00:14:27.397 =========== 00:14:27.397 Arbitration Burst: 1 00:14:27.397 00:14:27.397 Power Management 00:14:27.397 ================ 00:14:27.397 Number of Power States: 1 00:14:27.397 Current Power State: Power State #0 00:14:27.397 Power State #0: 00:14:27.397 Max Power: 0.00 W 00:14:27.397 Non-Operational State: Operational 00:14:27.397 Entry Latency: Not Reported 00:14:27.397 Exit Latency: Not Reported 00:14:27.397 Relative Read Throughput: 0 00:14:27.397 Relative Read Latency: 0 00:14:27.397 Relative Write Throughput: 0 00:14:27.397 Relative Write Latency: 0 00:14:27.397 Idle Power: Not Reported 00:14:27.397 Active Power: Not Reported 00:14:27.397 Non-Operational Permissive Mode: Not Supported 00:14:27.397 00:14:27.397 Health Information 00:14:27.397 ================== 00:14:27.397 Critical Warnings: 00:14:27.397 
Available Spare Space: OK 00:14:27.397 Temperature: OK 00:14:27.397 Device Reliability: OK 00:14:27.397 Read Only: No 00:14:27.397 Volatile Memory Backup: OK 00:14:27.397 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:27.397 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:27.397 Available Spare: 0% 00:14:27.397 [2024-07-25 09:28:59.939559] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:27.397 [2024-07-25 09:28:59.947367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:27.397 [2024-07-25 09:28:59.947419] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:27.397 [2024-07-25 09:28:59.947436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.397 [2024-07-25 09:28:59.947449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.397 [2024-07-25 09:28:59.947459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.397 [2024-07-25 09:28:59.947469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.397 [2024-07-25 09:28:59.947553] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:27.397 [2024-07-25 09:28:59.947575] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:27.397 [2024-07-25 09:28:59.948562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.397 [2024-07-25 09:28:59.948653] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:27.397 [2024-07-25 09:28:59.948668] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:27.397 [2024-07-25 09:28:59.949575] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:27.397 [2024-07-25 09:28:59.949600] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:27.397 [2024-07-25 09:28:59.949658] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:27.397 [2024-07-25 09:28:59.950847] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.397 Available Spare Threshold: 0% 00:14:27.397 Life Percentage Used: 0% 00:14:27.397 Data Units Read: 0 00:14:27.397 Data Units Written: 0 00:14:27.397 Host Read Commands: 0 00:14:27.397 Host Write Commands: 0 00:14:27.397 Controller Busy Time: 0 minutes 00:14:27.397 Power Cycles: 0 00:14:27.397 Power On Hours: 0 hours 00:14:27.397 Unsafe Shutdowns: 0 00:14:27.397 Unrecoverable Media Errors: 0 00:14:27.397 Lifetime Error Log Entries: 0 00:14:27.397 Warning Temperature Time: 0 minutes 00:14:27.397 Critical Temperature Time: 0 minutes 00:14:27.397 
00:14:27.397 Number of Queues 00:14:27.397 ================ 00:14:27.397 Number of I/O Submission Queues: 127 00:14:27.397 Number of I/O Completion Queues: 127 00:14:27.397 00:14:27.397 Active Namespaces 00:14:27.397 ================= 00:14:27.397 Namespace ID:1 00:14:27.397 Error Recovery Timeout: Unlimited 00:14:27.397 Command Set Identifier: NVM (00h) 00:14:27.397 Deallocate: Supported 00:14:27.397 Deallocated/Unwritten Error: Not Supported 00:14:27.397 Deallocated Read Value: Unknown 00:14:27.397 Deallocate in Write Zeroes: Not Supported 00:14:27.397 Deallocated Guard Field: 0xFFFF 00:14:27.397 Flush: Supported 00:14:27.397 Reservation: Supported 00:14:27.397 Namespace Sharing Capabilities: Multiple Controllers 00:14:27.397 Size (in LBAs): 131072 (0GiB) 00:14:27.397 Capacity (in LBAs): 131072 (0GiB) 00:14:27.397 Utilization (in LBAs): 131072 (0GiB) 00:14:27.397 NGUID: B8038D2F63744A218E81D06BF0E51759 00:14:27.397 UUID: b8038d2f-6374-4a21-8e81-d06bf0e51759 00:14:27.397 Thin Provisioning: Not Supported 00:14:27.397 Per-NS Atomic Units: Yes 00:14:27.397 Atomic Boundary Size (Normal): 0 00:14:27.397 Atomic Boundary Size (PFail): 0 00:14:27.397 Atomic Boundary Offset: 0 00:14:27.397 Maximum Single Source Range Length: 65535 00:14:27.397 Maximum Copy Length: 65535 00:14:27.397 Maximum Source Range Count: 1 00:14:27.397 NGUID/EUI64 Never Reused: No 00:14:27.397 Namespace Write Protected: No 00:14:27.397 Number of LBA Formats: 1 00:14:27.397 Current LBA Format: LBA Format #00 00:14:27.398 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.398 00:14:27.398 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:27.398 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.655 [2024-07-25 09:29:00.176094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:32.914 Initializing NVMe Controllers 00:14:32.914 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:32.914 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:32.914 Initialization complete. Launching workers. 
00:14:32.914 ======================================================== 00:14:32.914 Latency(us) 00:14:32.914 Device Information : IOPS MiB/s Average min max 00:14:32.914 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33753.66 131.85 3791.45 1188.24 8255.83 00:14:32.914 ======================================================== 00:14:32.914 Total : 33753.66 131.85 3791.45 1188.24 8255.83 00:14:32.914 00:14:32.914 [2024-07-25 09:29:05.282703] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:32.914 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:32.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.914 [2024-07-25 09:29:05.525382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.172 Initializing NVMe Controllers 00:14:38.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:38.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:38.172 Initialization complete. Launching workers. 00:14:38.172 ======================================================== 00:14:38.172 Latency(us) 00:14:38.172 Device Information : IOPS MiB/s Average min max 00:14:38.172 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31623.60 123.53 4049.21 1201.51 10450.78 00:14:38.172 ======================================================== 00:14:38.172 Total : 31623.60 123.53 4049.21 1201.51 10450.78 00:14:38.172 00:14:38.172 [2024-07-25 09:29:10.547630] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.172 09:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:38.172 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.172 [2024-07-25 09:29:10.755403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.436 [2024-07-25 09:29:15.885503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.436 Initializing NVMe Controllers 00:14:43.436 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.436 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:43.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:43.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:43.436 Initialization complete. Launching workers. 
00:14:43.436 Starting thread on core 2 00:14:43.436 Starting thread on core 3 00:14:43.436 Starting thread on core 1 00:14:43.436 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:43.436 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.694 [2024-07-25 09:29:16.186912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.874 [2024-07-25 09:29:19.792623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.874 Initializing NVMe Controllers 00:14:47.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:47.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:47.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:47.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:47.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:47.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:47.875 Initialization complete. Launching workers. 00:14:47.875 Starting thread on core 1 with urgent priority queue 00:14:47.875 Starting thread on core 2 with urgent priority queue 00:14:47.875 Starting thread on core 3 with urgent priority queue 00:14:47.875 Starting thread on core 0 with urgent priority queue 00:14:47.875 SPDK bdev Controller (SPDK2 ) core 0: 5040.33 IO/s 19.84 secs/100000 ios 00:14:47.875 SPDK bdev Controller (SPDK2 ) core 1: 5508.33 IO/s 18.15 secs/100000 ios 00:14:47.875 SPDK bdev Controller (SPDK2 ) core 2: 5462.67 IO/s 18.31 secs/100000 ios 00:14:47.875 SPDK bdev Controller (SPDK2 ) core 3: 5253.00 IO/s 19.04 secs/100000 ios 00:14:47.875 ======================================================== 00:14:47.875 00:14:47.875 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:47.875 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.875 [2024-07-25 09:29:20.096894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.875 Initializing NVMe Controllers 00:14:47.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.875 Namespace ID: 1 size: 0GB 00:14:47.875 Initialization complete. 00:14:47.875 INFO: using host memory buffer for IO 00:14:47.875 Hello world! 
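The five runs above (steps 84 through 88 of nvmf_vfio_user.sh) all point different SPDK example binaries at the same vfio-user controller; only the workload flags change. A condensed shell sketch of that pattern, assembled strictly from the invocations traced above — SPDK_DIR is an assumed placeholder for the build tree, which the log spells out as the full Jenkins workspace path:

  SPDK_DIR=/path/to/spdk   # assumed placeholder; the log uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 4 KiB read then write for 5 s at queue depth 128 on core 1 (steps 84 and 85)
  $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
  # 50/50 random mix at queue depth 32 on cores 1-3 via the reconnect example (step 86)
  $SPDK_DIR/build/examples/reconnect -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  # priority arbitration and the hello_world smoke test against the same controller (steps 87 and 88)
  $SPDK_DIR/build/examples/arbitration -t 3 -r "$TR" -d 256 -g
  $SPDK_DIR/build/examples/hello_world -d 256 -g -r "$TR"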
00:14:47.875 [2024-07-25 09:29:20.106068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.875 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:47.875 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.875 [2024-07-25 09:29:20.399151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.806 Initializing NVMe Controllers 00:14:48.806 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.806 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.806 Initialization complete. Launching workers. 00:14:48.806 submit (in ns) avg, min, max = 8928.1, 3521.1, 4016572.2 00:14:48.806 complete (in ns) avg, min, max = 27051.2, 2072.2, 5996852.2 00:14:48.806 00:14:48.806 Submit histogram 00:14:48.806 ================ 00:14:48.806 Range in us Cumulative Count 00:14:48.806 3.508 - 3.532: 0.0533% ( 7) 00:14:48.806 3.532 - 3.556: 1.2794% ( 161) 00:14:48.806 3.556 - 3.579: 3.2595% ( 260) 00:14:48.806 3.579 - 3.603: 7.8821% ( 607) 00:14:48.806 3.603 - 3.627: 15.6728% ( 1023) 00:14:48.806 3.627 - 3.650: 27.7892% ( 1591) 00:14:48.806 3.650 - 3.674: 37.2553% ( 1243) 00:14:48.806 3.674 - 3.698: 44.4749% ( 948) 00:14:48.806 3.698 - 3.721: 50.1637% ( 747) 00:14:48.806 3.721 - 3.745: 55.6622% ( 722) 00:14:48.806 3.745 - 3.769: 61.2139% ( 729) 00:14:48.806 3.769 - 3.793: 65.6005% ( 576) 00:14:48.806 3.793 - 3.816: 68.9361% ( 438) 00:14:48.806 3.816 - 3.840: 71.9138% ( 391) 00:14:48.806 3.840 - 3.864: 75.5616% ( 479) 00:14:48.806 3.864 - 3.887: 79.1638% ( 473) 00:14:48.806 3.887 - 3.911: 82.7964% ( 477) 00:14:48.806 3.911 - 3.935: 85.5990% ( 368) 00:14:48.806 3.935 - 3.959: 87.3658% ( 232) 00:14:48.806 3.959 - 3.982: 89.1935% ( 240) 00:14:48.806 3.982 - 4.006: 90.6938% ( 197) 00:14:48.806 4.006 - 4.030: 92.1027% ( 185) 00:14:48.806 4.030 - 4.053: 92.9861% ( 116) 00:14:48.806 4.053 - 4.077: 93.7781% ( 104) 00:14:48.806 4.077 - 4.101: 94.5168% ( 97) 00:14:48.806 4.101 - 4.124: 95.2098% ( 91) 00:14:48.806 4.124 - 4.148: 95.6210% ( 54) 00:14:48.806 4.148 - 4.172: 95.9257% ( 40) 00:14:48.806 4.172 - 4.196: 96.2074% ( 37) 00:14:48.806 4.196 - 4.219: 96.4664% ( 34) 00:14:48.806 4.219 - 4.243: 96.6111% ( 19) 00:14:48.806 4.243 - 4.267: 96.7253% ( 15) 00:14:48.806 4.267 - 4.290: 96.8472% ( 16) 00:14:48.806 4.290 - 4.314: 96.9309% ( 11) 00:14:48.806 4.314 - 4.338: 97.0071% ( 10) 00:14:48.806 4.338 - 4.361: 97.1289% ( 16) 00:14:48.806 4.361 - 4.385: 97.1975% ( 9) 00:14:48.806 4.385 - 4.409: 97.2584% ( 8) 00:14:48.806 4.409 - 4.433: 97.3117% ( 7) 00:14:48.806 4.433 - 4.456: 97.3346% ( 3) 00:14:48.806 4.456 - 4.480: 97.3726% ( 5) 00:14:48.806 4.480 - 4.504: 97.4107% ( 5) 00:14:48.806 4.504 - 4.527: 97.4336% ( 3) 00:14:48.806 4.527 - 4.551: 97.4488% ( 2) 00:14:48.806 4.551 - 4.575: 97.4716% ( 3) 00:14:48.806 4.599 - 4.622: 97.4792% ( 1) 00:14:48.806 4.622 - 4.646: 97.4869% ( 1) 00:14:48.806 4.693 - 4.717: 97.4945% ( 1) 00:14:48.806 4.717 - 4.741: 97.5021% ( 1) 00:14:48.806 4.741 - 4.764: 97.5097% ( 1) 00:14:48.806 4.764 - 4.788: 97.5326% ( 3) 00:14:48.806 4.788 - 4.812: 97.5478% ( 2) 00:14:48.806 4.812 - 4.836: 97.6239% ( 10) 00:14:48.806 4.836 - 4.859: 97.6544% ( 4) 00:14:48.806 4.859 - 4.883: 97.7001% ( 6) 00:14:48.806 4.883 - 4.907: 97.7610% ( 8) 
00:14:48.806 4.907 - 4.930: 97.8143% ( 7) 00:14:48.806 4.930 - 4.954: 97.8753% ( 8) 00:14:48.806 4.954 - 4.978: 97.9286% ( 7) 00:14:48.806 4.978 - 5.001: 97.9819% ( 7) 00:14:48.806 5.001 - 5.025: 98.0504% ( 9) 00:14:48.806 5.025 - 5.049: 98.0885% ( 5) 00:14:48.806 5.049 - 5.073: 98.1190% ( 4) 00:14:48.806 5.073 - 5.096: 98.1494% ( 4) 00:14:48.806 5.096 - 5.120: 98.1799% ( 4) 00:14:48.806 5.120 - 5.144: 98.2180% ( 5) 00:14:48.806 5.144 - 5.167: 98.2408% ( 3) 00:14:48.806 5.167 - 5.191: 98.2560% ( 2) 00:14:48.806 5.191 - 5.215: 98.2713% ( 2) 00:14:48.806 5.215 - 5.239: 98.2789% ( 1) 00:14:48.806 5.239 - 5.262: 98.2865% ( 1) 00:14:48.806 5.262 - 5.286: 98.2941% ( 1) 00:14:48.806 5.286 - 5.310: 98.3170% ( 3) 00:14:48.806 5.310 - 5.333: 98.3246% ( 1) 00:14:48.806 5.333 - 5.357: 98.3398% ( 2) 00:14:48.806 5.357 - 5.381: 98.3474% ( 1) 00:14:48.806 5.381 - 5.404: 98.3550% ( 1) 00:14:48.806 5.404 - 5.428: 98.3703% ( 2) 00:14:48.806 5.594 - 5.618: 98.3779% ( 1) 00:14:48.806 5.641 - 5.665: 98.3855% ( 1) 00:14:48.806 5.689 - 5.713: 98.3931% ( 1) 00:14:48.806 5.760 - 5.784: 98.4007% ( 1) 00:14:48.806 5.807 - 5.831: 98.4083% ( 1) 00:14:48.806 5.855 - 5.879: 98.4160% ( 1) 00:14:48.806 5.879 - 5.902: 98.4236% ( 1) 00:14:48.806 6.210 - 6.258: 98.4312% ( 1) 00:14:48.806 6.637 - 6.684: 98.4388% ( 1) 00:14:48.806 7.253 - 7.301: 98.4464% ( 1) 00:14:48.806 7.490 - 7.538: 98.4540% ( 1) 00:14:48.806 7.585 - 7.633: 98.4617% ( 1) 00:14:48.806 7.633 - 7.680: 98.4693% ( 1) 00:14:48.806 7.727 - 7.775: 98.4769% ( 1) 00:14:48.806 7.775 - 7.822: 98.4845% ( 1) 00:14:48.806 7.917 - 7.964: 98.4997% ( 2) 00:14:48.806 7.964 - 8.012: 98.5073% ( 1) 00:14:48.806 8.012 - 8.059: 98.5150% ( 1) 00:14:48.806 8.059 - 8.107: 98.5226% ( 1) 00:14:48.806 8.154 - 8.201: 98.5302% ( 1) 00:14:48.806 8.201 - 8.249: 98.5378% ( 1) 00:14:48.806 8.249 - 8.296: 98.5530% ( 2) 00:14:48.806 8.296 - 8.344: 98.5683% ( 2) 00:14:48.806 8.439 - 8.486: 98.5835% ( 2) 00:14:48.806 8.486 - 8.533: 98.5987% ( 2) 00:14:48.806 8.533 - 8.581: 98.6140% ( 2) 00:14:48.806 8.676 - 8.723: 98.6216% ( 1) 00:14:48.806 8.818 - 8.865: 98.6368% ( 2) 00:14:48.806 8.960 - 9.007: 98.6444% ( 1) 00:14:48.806 9.007 - 9.055: 98.6749% ( 4) 00:14:48.806 9.197 - 9.244: 98.6825% ( 1) 00:14:48.806 9.292 - 9.339: 98.6901% ( 1) 00:14:48.806 9.339 - 9.387: 98.7054% ( 2) 00:14:48.806 9.434 - 9.481: 98.7206% ( 2) 00:14:48.806 9.529 - 9.576: 98.7282% ( 1) 00:14:48.806 9.576 - 9.624: 98.7358% ( 1) 00:14:48.806 9.624 - 9.671: 98.7434% ( 1) 00:14:48.806 9.719 - 9.766: 98.7510% ( 1) 00:14:48.806 9.956 - 10.003: 98.7587% ( 1) 00:14:48.806 10.003 - 10.050: 98.7663% ( 1) 00:14:48.806 10.193 - 10.240: 98.7739% ( 1) 00:14:48.806 10.287 - 10.335: 98.7815% ( 1) 00:14:48.806 10.524 - 10.572: 98.7967% ( 2) 00:14:48.806 10.572 - 10.619: 98.8044% ( 1) 00:14:48.806 10.714 - 10.761: 98.8120% ( 1) 00:14:48.806 10.761 - 10.809: 98.8272% ( 2) 00:14:48.806 10.951 - 10.999: 98.8348% ( 1) 00:14:48.806 11.093 - 11.141: 98.8424% ( 1) 00:14:48.806 11.473 - 11.520: 98.8577% ( 2) 00:14:48.806 11.567 - 11.615: 98.8653% ( 1) 00:14:48.806 11.662 - 11.710: 98.8729% ( 1) 00:14:48.806 11.710 - 11.757: 98.8805% ( 1) 00:14:48.806 12.041 - 12.089: 98.8881% ( 1) 00:14:48.806 12.326 - 12.421: 98.8957% ( 1) 00:14:48.806 12.421 - 12.516: 98.9034% ( 1) 00:14:48.806 12.516 - 12.610: 98.9110% ( 1) 00:14:48.806 12.705 - 12.800: 98.9262% ( 2) 00:14:48.806 12.990 - 13.084: 98.9338% ( 1) 00:14:48.806 13.084 - 13.179: 98.9414% ( 1) 00:14:48.806 13.559 - 13.653: 98.9491% ( 1) 00:14:48.806 13.653 - 13.748: 98.9567% ( 1) 00:14:48.806 14.222 - 
14.317: 98.9795% ( 3) 00:14:48.806 14.317 - 14.412: 98.9871% ( 1) 00:14:48.806 14.791 - 14.886: 98.9947% ( 1) 00:14:48.806 17.067 - 17.161: 99.0100% ( 2) 00:14:48.806 17.161 - 17.256: 99.0176% ( 1) 00:14:48.806 17.256 - 17.351: 99.0252% ( 1) 00:14:48.806 17.351 - 17.446: 99.0709% ( 6) 00:14:48.806 17.446 - 17.541: 99.0861% ( 2) 00:14:48.806 17.541 - 17.636: 99.1242% ( 5) 00:14:48.806 17.636 - 17.730: 99.1775% ( 7) 00:14:48.806 17.730 - 17.825: 99.2232% ( 6) 00:14:48.806 17.825 - 17.920: 99.2537% ( 4) 00:14:48.806 17.920 - 18.015: 99.3222% ( 9) 00:14:48.806 18.015 - 18.110: 99.3527% ( 4) 00:14:48.806 18.110 - 18.204: 99.3984% ( 6) 00:14:48.807 18.204 - 18.299: 99.4669% ( 9) 00:14:48.807 18.299 - 18.394: 99.5355% ( 9) 00:14:48.807 18.394 - 18.489: 99.6345% ( 13) 00:14:48.807 18.489 - 18.584: 99.6573% ( 3) 00:14:48.807 18.584 - 18.679: 99.7030% ( 6) 00:14:48.807 18.679 - 18.773: 99.7487% ( 6) 00:14:48.807 18.773 - 18.868: 99.7639% ( 2) 00:14:48.807 18.868 - 18.963: 99.7868% ( 3) 00:14:48.807 18.963 - 19.058: 99.7944% ( 1) 00:14:48.807 19.058 - 19.153: 99.8248% ( 4) 00:14:48.807 19.153 - 19.247: 99.8401% ( 2) 00:14:48.807 19.532 - 19.627: 99.8477% ( 1) 00:14:48.807 22.566 - 22.661: 99.8553% ( 1) 00:14:48.807 23.324 - 23.419: 99.8629% ( 1) 00:14:48.807 25.790 - 25.979: 99.8705% ( 1) 00:14:48.807 1577.719 - 1589.855: 99.8782% ( 1) 00:14:48.807 3980.705 - 4004.978: 99.9695% ( 12) 00:14:48.807 4004.978 - 4029.250: 100.0000% ( 4) 00:14:48.807 00:14:48.807 Complete histogram 00:14:48.807 ================== 00:14:48.807 Range in us Cumulative Count 00:14:48.807 2.062 - 2.074: 0.0152% ( 2) 00:14:48.807 2.074 - 2.086: 11.0121% ( 1444) 00:14:48.807 2.086 - 2.098: 31.3990% ( 2677) 00:14:48.807 2.098 - 2.110: 34.9098% ( 461) 00:14:48.807 2.110 - 2.121: 49.3336% ( 1894) 00:14:48.807 2.121 - 2.133: 58.7693% ( 1239) 00:14:48.807 2.133 - 2.145: 61.2748% ( 329) 00:14:48.807 2.145 - 2.157: 68.9666% ( 1010) 00:14:48.807 2.157 - 2.169: 74.8458% ( 772) 00:14:48.807 2.169 - 2.181: 76.8944% ( 269) 00:14:48.807 2.181 - 2.193: 83.2991% ( 841) 00:14:48.807 2.193 - 2.204: 87.0535% ( 493) 00:14:48.807 2.204 - 2.216: 87.9141% ( 113) 00:14:48.807 2.216 - 2.228: 89.4981% ( 208) 00:14:48.807 2.228 - 2.240: 91.1659% ( 219) 00:14:48.807 2.240 - 2.252: 92.9099% ( 229) 00:14:48.807 2.252 - 2.264: 94.2274% ( 173) 00:14:48.807 2.264 - 2.276: 94.8214% ( 78) 00:14:48.807 2.276 - 2.287: 94.9661% ( 19) 00:14:48.807 2.287 - 2.299: 95.1717% ( 27) 00:14:48.807 2.299 - 2.311: 95.4535% ( 37) 00:14:48.807 2.311 - 2.323: 95.8800% ( 56) 00:14:48.807 2.323 - 2.335: 96.0323% ( 20) 00:14:48.807 2.335 - 2.347: 96.0780% ( 6) 00:14:48.807 2.347 - 2.359: 96.1465% ( 9) 00:14:48.807 2.359 - 2.370: 96.1770% ( 4) 00:14:48.807 2.370 - 2.382: 96.1998% ( 3) 00:14:48.807 2.382 - 2.394: 96.3445% ( 19) 00:14:48.807 2.394 - 2.406: 96.4588% ( 15) 00:14:48.807 2.406 - 2.418: 96.5882% ( 17) 00:14:48.807 2.418 - 2.430: 96.7405% ( 20) 00:14:48.807 2.430 - 2.441: 96.9233% ( 24) 00:14:48.807 2.441 - 2.453: 97.1061% ( 24) 00:14:48.807 2.453 - 2.465: 97.2965% ( 25) 00:14:48.807 2.465 - 2.477: 97.5402% ( 32) 00:14:48.807 2.477 - 2.489: 97.6925% ( 20) 00:14:48.807 2.489 - 2.501: 97.8905% ( 26) 00:14:48.807 2.501 - 2.513: 98.0656% ( 23) 00:14:48.807 2.513 - 2.524: 98.1113% ( 6) 00:14:48.807 2.524 - 2.536: 98.2180% ( 14) 00:14:48.807 2.536 - 2.548: 98.3170% ( 13) 00:14:48.807 2.548 - 2.560: 98.3627% ( 6) 00:14:48.807 2.560 - 2.572: 98.4083% ( 6) 00:14:48.807 2.572 - 2.584: 98.4845% ( 10) 00:14:48.807 2.584 - 2.596: 98.5073% ( 3) 00:14:48.807 2.596 - 2.607: 98.5302% ( 3) 
00:14:48.807 2.607 - 2.619: 98.5454% ( 2) 00:14:48.807 2.619 - 2.631: 98.5607% ( 2) 00:14:48.807 2.631 - 2.643: 98.5759% ( 2) 00:14:48.807 2.643 - 2.655: 98.5987% ( 3) 00:14:48.807 2.655 - 2.667: 98.6140% ( 2) 00:14:48.807 2.690 - 2.702: 98.6216% ( 1) 00:14:48.807 2.702 - 2.714: 98.6292% ( 1) 00:14:48.807 2.714 - 2.726: 98.6368% ( 1) 00:14:48.807 2.750 - 2.761: 98.6444% ( 1) 00:14:48.807 2.761 - 2.773: 98.6520% ( 1) 00:14:48.807 2.797 - 2.809: 98.6597% ( 1) 00:14:48.807 2.821 - 2.833: 98.6673% ( 1) 00:14:48.807 2.844 - 2.856: 98.6749% ( 1) 00:14:48.807 2.939 - 2.951: 98.6825% ( 1) 00:14:48.807 2.963 - 2.975: 98.6901% ( 1) 00:14:48.807 3.176 - 3.200: 98.6977% ( 1) 00:14:48.807 3.484 - 3.508: 98.7130% ( 2) 00:14:48.807 3.508 - 3.532: 98.7434% ( 4) 00:14:48.807 3.532 - 3.556: 98.7663% ( 3) 00:14:48.807 3.674 - 3.698: 98.7739% ( 1) 00:14:48.807 3.721 - 3.745: 98.7815% ( 1) 00:14:48.807 3.769 - 3.793: 98.7891% ( 1) 00:14:48.807 3.864 - 3.887: 98.7967% ( 1) 00:14:48.807 3.887 - 3.911: 98.8044% ( 1) 00:14:48.807 4.006 - 4.030: 98.8120% ( 1) 00:14:48.807 4.077 - 4.101: 98.8196% ( 1) 00:14:48.807 4.219 - 4.243: 98.8348% ( 2) 00:14:48.807 4.433 - 4.456: 98.8424% ( 1) 00:14:48.807 6.116 - 6.163: 98.8500% ( 1) 00:14:48.807 6.258 - 6.305: 98.8729% ( 3) 00:14:48.807 6.305 - 6.353: 98.8805% ( 1) 00:14:48.807 6.684 - 6.732: 98.8881% ( 1) 00:14:48.807 6.732 - 6.779: 98.9034% ( 2) 00:14:48.807 6.827 - 6.874: 9[2024-07-25 09:29:21.501162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.064 8.9110% ( 1) 00:14:49.065 7.159 - 7.206: 98.9186% ( 1) 00:14:49.065 7.206 - 7.253: 98.9262% ( 1) 00:14:49.065 7.253 - 7.301: 98.9338% ( 1) 00:14:49.065 7.348 - 7.396: 98.9491% ( 2) 00:14:49.065 7.538 - 7.585: 98.9567% ( 1) 00:14:49.065 7.585 - 7.633: 98.9643% ( 1) 00:14:49.065 7.727 - 7.775: 98.9719% ( 1) 00:14:49.065 8.439 - 8.486: 98.9795% ( 1) 00:14:49.065 9.150 - 9.197: 98.9871% ( 1) 00:14:49.065 10.619 - 10.667: 98.9947% ( 1) 00:14:49.065 15.929 - 16.024: 99.0024% ( 1) 00:14:49.065 16.024 - 16.119: 99.0100% ( 1) 00:14:49.065 16.119 - 16.213: 99.0633% ( 7) 00:14:49.065 16.308 - 16.403: 99.0937% ( 4) 00:14:49.065 16.403 - 16.498: 99.1242% ( 4) 00:14:49.065 16.498 - 16.593: 99.1851% ( 8) 00:14:49.065 16.593 - 16.687: 99.2004% ( 2) 00:14:49.065 16.687 - 16.782: 99.2156% ( 2) 00:14:49.065 16.782 - 16.877: 99.2232% ( 1) 00:14:49.065 16.877 - 16.972: 99.2384% ( 2) 00:14:49.065 16.972 - 17.067: 99.2461% ( 1) 00:14:49.065 17.161 - 17.256: 99.2765% ( 4) 00:14:49.065 17.256 - 17.351: 99.2994% ( 3) 00:14:49.065 17.351 - 17.446: 99.3146% ( 2) 00:14:49.065 17.541 - 17.636: 99.3222% ( 1) 00:14:49.065 17.825 - 17.920: 99.3298% ( 1) 00:14:49.065 18.110 - 18.204: 99.3374% ( 1) 00:14:49.065 18.204 - 18.299: 99.3527% ( 2) 00:14:49.065 18.299 - 18.394: 99.3603% ( 1) 00:14:49.065 24.178 - 24.273: 99.3679% ( 1) 00:14:49.065 29.772 - 29.961: 99.3755% ( 1) 00:14:49.065 30.151 - 30.341: 99.3831% ( 1) 00:14:49.065 3980.705 - 4004.978: 99.8325% ( 59) 00:14:49.065 4004.978 - 4029.250: 99.9924% ( 21) 00:14:49.065 5995.330 - 6019.603: 100.0000% ( 1) 00:14:49.065 00:14:49.065 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:49.065 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:49.065 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- 
# local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:49.065 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:49.065 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:49.322 [ 00:14:49.323 { 00:14:49.323 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:49.323 "subtype": "Discovery", 00:14:49.323 "listen_addresses": [], 00:14:49.323 "allow_any_host": true, 00:14:49.323 "hosts": [] 00:14:49.323 }, 00:14:49.323 { 00:14:49.323 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:49.323 "subtype": "NVMe", 00:14:49.323 "listen_addresses": [ 00:14:49.323 { 00:14:49.323 "trtype": "VFIOUSER", 00:14:49.323 "adrfam": "IPv4", 00:14:49.323 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:49.323 "trsvcid": "0" 00:14:49.323 } 00:14:49.323 ], 00:14:49.323 "allow_any_host": true, 00:14:49.323 "hosts": [], 00:14:49.323 "serial_number": "SPDK1", 00:14:49.323 "model_number": "SPDK bdev Controller", 00:14:49.323 "max_namespaces": 32, 00:14:49.323 "min_cntlid": 1, 00:14:49.323 "max_cntlid": 65519, 00:14:49.323 "namespaces": [ 00:14:49.323 { 00:14:49.323 "nsid": 1, 00:14:49.323 "bdev_name": "Malloc1", 00:14:49.323 "name": "Malloc1", 00:14:49.323 "nguid": "44F3A138EC394266BEBCDA035D441F8D", 00:14:49.323 "uuid": "44f3a138-ec39-4266-bebc-da035d441f8d" 00:14:49.323 }, 00:14:49.323 { 00:14:49.323 "nsid": 2, 00:14:49.323 "bdev_name": "Malloc3", 00:14:49.323 "name": "Malloc3", 00:14:49.323 "nguid": "E7D2D2E46542451596D8462529FE9C84", 00:14:49.323 "uuid": "e7d2d2e4-6542-4515-96d8-462529fe9c84" 00:14:49.323 } 00:14:49.323 ] 00:14:49.323 }, 00:14:49.323 { 00:14:49.323 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:49.323 "subtype": "NVMe", 00:14:49.323 "listen_addresses": [ 00:14:49.323 { 00:14:49.323 "trtype": "VFIOUSER", 00:14:49.323 "adrfam": "IPv4", 00:14:49.323 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:49.323 "trsvcid": "0" 00:14:49.323 } 00:14:49.323 ], 00:14:49.323 "allow_any_host": true, 00:14:49.323 "hosts": [], 00:14:49.323 "serial_number": "SPDK2", 00:14:49.323 "model_number": "SPDK bdev Controller", 00:14:49.323 "max_namespaces": 32, 00:14:49.323 "min_cntlid": 1, 00:14:49.323 "max_cntlid": 65519, 00:14:49.323 "namespaces": [ 00:14:49.323 { 00:14:49.323 "nsid": 1, 00:14:49.323 "bdev_name": "Malloc2", 00:14:49.323 "name": "Malloc2", 00:14:49.323 "nguid": "B8038D2F63744A218E81D06BF0E51759", 00:14:49.323 "uuid": "b8038d2f-6374-4a21-8e81-d06bf0e51759" 00:14:49.323 } 00:14:49.323 ] 00:14:49.323 } 00:14:49.323 ] 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=505563 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # i=1 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # sleep 0.1 00:14:49.323 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # i=2 00:14:49.323 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # sleep 0.1 00:14:49.323 [2024-07-25 09:29:22.009898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:49.323 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.323 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.323 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:14:49.323 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:49.580 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:49.838 Malloc4 00:14:49.838 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:50.096 [2024-07-25 09:29:22.589235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.096 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:50.096 Asynchronous Event Request test 00:14:50.096 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.096 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.096 Registering asynchronous event callbacks... 00:14:50.096 Starting namespace attribute notice tests for all controllers... 00:14:50.096 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:50.096 aer_cb - Changed Namespace 00:14:50.096 Cleaning up... 
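The AER exercise above (step 90, aer_vfio_user) works by hot-adding a second namespace to cnode2 while the aer helper is listening, which produces the "aer_cb - Changed Namespace" notification; the subsystem listing that follows confirms the new namespace. A minimal sketch of that hot-add, using only the RPCs traced above (RPC is an abbreviated stand-in for the full scripts/rpc.py path in the workspace):

  RPC=/path/to/spdk/scripts/rpc.py   # abbreviated; the log uses the full Jenkins workspace path
  $RPC bdev_malloc_create 64 512 --name Malloc4                       # create the new backing bdev
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2  # hot-add as nsid 2, which fires the AER
  $RPC nvmf_get_subsystems                                            # Malloc4 now appears under cnode2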
00:14:50.354 [ 00:14:50.354 { 00:14:50.354 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:50.354 "subtype": "Discovery", 00:14:50.354 "listen_addresses": [], 00:14:50.354 "allow_any_host": true, 00:14:50.354 "hosts": [] 00:14:50.354 }, 00:14:50.354 { 00:14:50.354 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:50.354 "subtype": "NVMe", 00:14:50.354 "listen_addresses": [ 00:14:50.354 { 00:14:50.354 "trtype": "VFIOUSER", 00:14:50.354 "adrfam": "IPv4", 00:14:50.354 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:50.354 "trsvcid": "0" 00:14:50.354 } 00:14:50.354 ], 00:14:50.354 "allow_any_host": true, 00:14:50.354 "hosts": [], 00:14:50.354 "serial_number": "SPDK1", 00:14:50.354 "model_number": "SPDK bdev Controller", 00:14:50.354 "max_namespaces": 32, 00:14:50.354 "min_cntlid": 1, 00:14:50.354 "max_cntlid": 65519, 00:14:50.354 "namespaces": [ 00:14:50.354 { 00:14:50.354 "nsid": 1, 00:14:50.354 "bdev_name": "Malloc1", 00:14:50.354 "name": "Malloc1", 00:14:50.354 "nguid": "44F3A138EC394266BEBCDA035D441F8D", 00:14:50.354 "uuid": "44f3a138-ec39-4266-bebc-da035d441f8d" 00:14:50.354 }, 00:14:50.354 { 00:14:50.354 "nsid": 2, 00:14:50.354 "bdev_name": "Malloc3", 00:14:50.354 "name": "Malloc3", 00:14:50.354 "nguid": "E7D2D2E46542451596D8462529FE9C84", 00:14:50.354 "uuid": "e7d2d2e4-6542-4515-96d8-462529fe9c84" 00:14:50.354 } 00:14:50.354 ] 00:14:50.354 }, 00:14:50.354 { 00:14:50.354 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:50.354 "subtype": "NVMe", 00:14:50.354 "listen_addresses": [ 00:14:50.354 { 00:14:50.354 "trtype": "VFIOUSER", 00:14:50.354 "adrfam": "IPv4", 00:14:50.354 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:50.354 "trsvcid": "0" 00:14:50.354 } 00:14:50.354 ], 00:14:50.354 "allow_any_host": true, 00:14:50.354 "hosts": [], 00:14:50.354 "serial_number": "SPDK2", 00:14:50.354 "model_number": "SPDK bdev Controller", 00:14:50.354 "max_namespaces": 32, 00:14:50.354 "min_cntlid": 1, 00:14:50.354 "max_cntlid": 65519, 00:14:50.354 "namespaces": [ 00:14:50.354 { 00:14:50.354 "nsid": 1, 00:14:50.354 "bdev_name": "Malloc2", 00:14:50.354 "name": "Malloc2", 00:14:50.354 "nguid": "B8038D2F63744A218E81D06BF0E51759", 00:14:50.354 "uuid": "b8038d2f-6374-4a21-8e81-d06bf0e51759" 00:14:50.354 }, 00:14:50.354 { 00:14:50.354 "nsid": 2, 00:14:50.354 "bdev_name": "Malloc4", 00:14:50.354 "name": "Malloc4", 00:14:50.354 "nguid": "CE61329145004FEC8ADF8FF91F1D46BD", 00:14:50.354 "uuid": "ce613291-4500-4fec-8adf-8ff91f1d46bd" 00:14:50.354 } 00:14:50.354 ] 00:14:50.354 } 00:14:50.354 ] 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 505563 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 499840 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 499840 ']' 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 499840 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 499840 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 499840' 00:14:50.354 killing process with pid 499840 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 499840 00:14:50.354 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 499840 00:14:50.613 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:50.613 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:50.613 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:50.613 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:50.613 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:50.613 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=505709 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 505709' 00:14:50.614 Process pid: 505709 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 505709 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 505709 ']' 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.614 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:50.614 [2024-07-25 09:29:23.331305] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:50.614 [2024-07-25 09:29:23.332327] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:14:50.614 [2024-07-25 09:29:23.332412] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.871 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.871 [2024-07-25 09:29:23.395295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.871 [2024-07-25 09:29:23.510476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.871 [2024-07-25 09:29:23.510538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.871 [2024-07-25 09:29:23.510555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.871 [2024-07-25 09:29:23.510569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.871 [2024-07-25 09:29:23.510580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.871 [2024-07-25 09:29:23.510682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.871 [2024-07-25 09:29:23.510739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.871 [2024-07-25 09:29:23.510856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.871 [2024-07-25 09:29:23.510859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.128 [2024-07-25 09:29:23.623857] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:51.128 [2024-07-25 09:29:23.624111] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:51.128 [2024-07-25 09:29:23.624388] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:51.128 [2024-07-25 09:29:23.624934] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:51.128 [2024-07-25 09:29:23.625166] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
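At this point the target has been relaunched with --interrupt-mode on cores 0-3 (step 108, setup_nvmf_vfio_user with '-M -I'); the trace that follows rebuilds the VFIOUSER transport and both test devices on it. A condensed sketch of that bring-up sequence, using only commands that appear in the trace below (nvmf_tgt and rpc.py paths abbreviated from the full workspace paths):

  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &   # launched in the background by the test harness
  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    rpc.py bdev_malloc_create 64 512 -b Malloc$i
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done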
00:14:51.693 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.693 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:51.693 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:52.626 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:52.884 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:52.885 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:52.885 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.885 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:52.885 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:53.143 Malloc1 00:14:53.144 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:53.402 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:53.659 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:53.917 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.917 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:53.917 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.176 Malloc2 00:14:54.176 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:54.434 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:54.692 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 505709 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@948 -- # '[' -z 505709 ']' 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 505709 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 505709 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 505709' 00:14:54.951 killing process with pid 505709 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 505709 00:14:54.951 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 505709 00:14:55.519 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:55.519 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:55.519 00:14:55.519 real 0m54.142s 00:14:55.519 user 3m33.417s 00:14:55.519 sys 0m4.642s 00:14:55.519 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.519 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.519 ************************************ 00:14:55.520 END TEST nvmf_vfio_user 00:14:55.520 ************************************ 00:14:55.520 09:29:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:55.520 09:29:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.520 09:29:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.520 09:29:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.520 ************************************ 00:14:55.520 START TEST nvmf_vfio_user_nvme_compliance 00:14:55.520 ************************************ 00:14:55.520 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:55.520 * Looking for test storage... 
00:14:55.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=506320 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 506320' 00:14:55.520 Process pid: 506320 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 506320 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 506320 ']' 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.520 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:55.520 [2024-07-25 09:29:28.105451] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:14:55.520 [2024-07-25 09:29:28.105539] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.520 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.520 [2024-07-25 09:29:28.166108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:55.779 [2024-07-25 09:29:28.275196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.779 [2024-07-25 09:29:28.275245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.779 [2024-07-25 09:29:28.275274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.779 [2024-07-25 09:29:28.275286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.779 [2024-07-25 09:29:28.275296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.779 [2024-07-25 09:29:28.278377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.779 [2024-07-25 09:29:28.278443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.779 [2024-07-25 09:29:28.278446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.779 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.779 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:55.779 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.714 malloc0 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.714 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.973 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.973 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:56.973 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.973 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.973 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.973 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:56.973 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.973 00:14:56.973 00:14:56.973 CUnit - A unit testing framework for C - Version 2.1-3 00:14:56.973 http://cunit.sourceforge.net/ 00:14:56.973 00:14:56.973 00:14:56.973 Suite: nvme_compliance 00:14:56.973 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 09:29:29.633518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.973 [2024-07-25 09:29:29.634975] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:56.973 [2024-07-25 09:29:29.634999] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:56.973 [2024-07-25 09:29:29.635026] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:56.973 [2024-07-25 09:29:29.637546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.973 passed 00:14:57.231 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 09:29:29.723107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.231 [2024-07-25 09:29:29.726127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.231 passed 00:14:57.231 Test: admin_identify_ns ...[2024-07-25 09:29:29.811814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.231 [2024-07-25 09:29:29.875372] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:57.231 [2024-07-25 09:29:29.883376] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:57.231 [2024-07-25 
09:29:29.904500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.231 passed 00:14:57.490 Test: admin_get_features_mandatory_features ...[2024-07-25 09:29:29.986396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.490 [2024-07-25 09:29:29.989415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.490 passed 00:14:57.490 Test: admin_get_features_optional_features ...[2024-07-25 09:29:30.074971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.490 [2024-07-25 09:29:30.077999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.490 passed 00:14:57.490 Test: admin_set_features_number_of_queues ...[2024-07-25 09:29:30.160193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.747 [2024-07-25 09:29:30.264523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.747 passed 00:14:57.747 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 09:29:30.350628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.747 [2024-07-25 09:29:30.353653] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.747 passed 00:14:57.747 Test: admin_get_log_page_with_lpo ...[2024-07-25 09:29:30.435962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.005 [2024-07-25 09:29:30.503377] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:58.005 [2024-07-25 09:29:30.516448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.005 passed 00:14:58.005 Test: fabric_property_get ...[2024-07-25 09:29:30.600305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.005 [2024-07-25 09:29:30.601633] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:58.005 [2024-07-25 09:29:30.603325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.005 passed 00:14:58.005 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 09:29:30.687907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.005 [2024-07-25 09:29:30.689213] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:58.005 [2024-07-25 09:29:30.690927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.005 passed 00:14:58.262 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 09:29:30.774022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.262 [2024-07-25 09:29:30.857382] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.262 [2024-07-25 09:29:30.873364] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.262 [2024-07-25 09:29:30.878480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.262 passed 00:14:58.262 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 09:29:30.963551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.262 [2024-07-25 09:29:30.964868] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:14:58.262 [2024-07-25 09:29:30.966576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.520 passed 00:14:58.520 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 09:29:31.049629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.520 [2024-07-25 09:29:31.125365] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:58.520 [2024-07-25 09:29:31.149365] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.520 [2024-07-25 09:29:31.154487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.520 passed 00:14:58.520 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 09:29:31.236984] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.520 [2024-07-25 09:29:31.238276] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:58.520 [2024-07-25 09:29:31.238328] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:58.520 [2024-07-25 09:29:31.240003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.779 passed 00:14:58.779 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 09:29:31.323196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.779 [2024-07-25 09:29:31.414382] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:58.779 [2024-07-25 09:29:31.422367] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:58.779 [2024-07-25 09:29:31.430380] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:58.779 [2024-07-25 09:29:31.438369] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:58.779 [2024-07-25 09:29:31.467474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.779 passed 00:14:59.036 Test: admin_create_io_sq_verify_pc ...[2024-07-25 09:29:31.551561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.036 [2024-07-25 09:29:31.569382] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:59.036 [2024-07-25 09:29:31.586985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.036 passed 00:14:59.036 Test: admin_create_io_qp_max_qps ...[2024-07-25 09:29:31.669577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.407 [2024-07-25 09:29:32.778372] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:00.665 [2024-07-25 09:29:33.159551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.665 passed 00:15:00.665 Test: admin_create_io_sq_shared_cq ...[2024-07-25 09:29:33.242772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.665 [2024-07-25 09:29:33.378368] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:00.922 [2024-07-25 09:29:33.415452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.922 passed 00:15:00.922 00:15:00.922 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.922 
suites 1 1 n/a 0 0 00:15:00.922 tests 18 18 18 0 0 00:15:00.922 asserts 360 360 360 0 n/a 00:15:00.922 00:15:00.922 Elapsed time = 1.568 seconds 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 506320 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 506320 ']' 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 506320 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 506320 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 506320' 00:15:00.922 killing process with pid 506320 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 506320 00:15:00.922 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 506320 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:01.180 00:15:01.180 real 0m5.806s 00:15:01.180 user 0m16.226s 00:15:01.180 sys 0m0.568s 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 ************************************ 00:15:01.180 END TEST nvmf_vfio_user_nvme_compliance 00:15:01.180 ************************************ 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 ************************************ 00:15:01.180 START TEST nvmf_vfio_user_fuzz 00:15:01.180 ************************************ 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.180 * Looking for test storage... 
00:15:01.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.180 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:01.439 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=507041 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 507041' 00:15:01.440 Process pid: 507041 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 507041 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 507041 ']' 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.440 09:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:01.698 09:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.698 09:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:01.698 09:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.630 malloc0 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:15:02.630 09:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:34.689 Fuzzing completed. Shutting down the fuzz application 00:15:34.689 00:15:34.689 Dumping successful admin opcodes: 00:15:34.689 8, 9, 10, 24, 00:15:34.689 Dumping successful io opcodes: 00:15:34.689 0, 00:15:34.689 NS: 0x200003a1ef00 I/O qp, Total commands completed: 663072, total successful commands: 2589, random_seed: 2100439616 00:15:34.689 NS: 0x200003a1ef00 admin qp, Total commands completed: 86060, total successful commands: 690, random_seed: 2530873408 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 507041 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 507041 ']' 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 507041 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 507041 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 507041' 00:15:34.689 killing process with pid 507041 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 507041 00:15:34.689 09:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 507041 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:34.689 00:15:34.689 real 0m32.354s 00:15:34.689 user 0m33.382s 00:15:34.689 sys 0m26.558s 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.689 ************************************ 
00:15:34.689 END TEST nvmf_vfio_user_fuzz 00:15:34.689 ************************************ 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.689 ************************************ 00:15:34.689 START TEST nvmf_auth_target 00:15:34.689 ************************************ 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:34.689 * Looking for test storage... 00:15:34.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.689 09:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.689 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.690 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.624 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:15:35.624 Found 0000:82:00.0 (0x8086 - 0x159b) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:15:35.624 Found 0000:82:00.1 (0x8086 - 0x159b) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:15:35.624 Found net devices under 0000:82:00.0: cvl_0_0 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.624 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.625 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:15:35.625 Found net devices under 0000:82:00.1: cvl_0_1 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.625 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.883 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:15:35.883 00:15:35.883 --- 10.0.0.2 ping statistics --- 00:15:35.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.883 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:15:35.883 00:15:35.883 --- 10.0.0.1 ping statistics --- 00:15:35.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.883 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=512937 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 512937 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 512937 ']' 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.883 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.883 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=513013 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f24752fd7384385fa630d58906455dcb180976a5bcb71697 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ESJ 00:15:36.141 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f24752fd7384385fa630d58906455dcb180976a5bcb71697 0 00:15:36.142 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f24752fd7384385fa630d58906455dcb180976a5bcb71697 0 00:15:36.142 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.142 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.142 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f24752fd7384385fa630d58906455dcb180976a5bcb71697 00:15:36.142 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:36.142 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ESJ 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ESJ 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ESJ 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7fbcef1ac511c04f1bfd4f329c71c98978c9cf1c25bb502d500dcb00958511b9 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.x0v 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7fbcef1ac511c04f1bfd4f329c71c98978c9cf1c25bb502d500dcb00958511b9 3 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7fbcef1ac511c04f1bfd4f329c71c98978c9cf1c25bb502d500dcb00958511b9 3 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7fbcef1ac511c04f1bfd4f329c71c98978c9cf1c25bb502d500dcb00958511b9 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.x0v 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.x0v 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.x0v 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.400 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=caa91e3cfab011620b255cfcc68bd6cc 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cfi 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key caa91e3cfab011620b255cfcc68bd6cc 1 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 caa91e3cfab011620b255cfcc68bd6cc 1 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=caa91e3cfab011620b255cfcc68bd6cc 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:36.400 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cfi 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cfi 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.cfi 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bcedd1b8c02e63fc975ce15a6206b12e85a7c843a62a87e2 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Wzk 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bcedd1b8c02e63fc975ce15a6206b12e85a7c843a62a87e2 2 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
bcedd1b8c02e63fc975ce15a6206b12e85a7c843a62a87e2 2 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bcedd1b8c02e63fc975ce15a6206b12e85a7c843a62a87e2 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Wzk 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Wzk 00:15:36.400 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Wzk 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f9ad64f4306e7074d2c1a26b12ea18b42d15145bbdf9ec71 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YaY 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f9ad64f4306e7074d2c1a26b12ea18b42d15145bbdf9ec71 2 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f9ad64f4306e7074d2c1a26b12ea18b42d15145bbdf9ec71 2 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f9ad64f4306e7074d2c1a26b12ea18b42d15145bbdf9ec71 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YaY 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YaY 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.YaY 00:15:36.401 09:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8b6c0a8cff3c9fc09d8ccc8507fe7d5a 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0Ik 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8b6c0a8cff3c9fc09d8ccc8507fe7d5a 1 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8b6c0a8cff3c9fc09d8ccc8507fe7d5a 1 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8b6c0a8cff3c9fc09d8ccc8507fe7d5a 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:36.401 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0Ik 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0Ik 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.0Ik 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=efddded4bc75d4028717a40ee8ba78b5cd1e83adbbb35c8a3ab884ce84527e55 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:36.659 
09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Wrg 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key efddded4bc75d4028717a40ee8ba78b5cd1e83adbbb35c8a3ab884ce84527e55 3 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 efddded4bc75d4028717a40ee8ba78b5cd1e83adbbb35c8a3ab884ce84527e55 3 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=efddded4bc75d4028717a40ee8ba78b5cd1e83adbbb35c8a3ab884ce84527e55 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Wrg 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Wrg 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Wrg 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 512937 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 512937 ']' 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.659 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 513013 /var/tmp/host.sock 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 513013 ']' 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
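The gen_dhchap_key trace above reads the requested number of hex characters from /dev/urandom via xxd and hands them to an inline "python -" step that emits the DHHC-1 secrets later passed to nvme connect. The short standalone Python sketch below is an illustration only, not SPDK's own helper from nvmf/common.sh; it assumes the usual NVMe DH-HMAC-CHAP secret layout of base64(key characters followed by their CRC-32, appended little-endian) behind a "DHHC-1:<digest id>:" prefix. That assumption is consistent with the secrets visible in this log, e.g. the payload of DHHC-1:01:Y2FhOTFl... decodes back to the hex string caa91e3c... generated above plus four trailing checksum bytes.

import base64
import os
import zlib


def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    # Illustrative only: wrap an ASCII hex key into a DHHC-1 secret string.
    # Assumed layout (as used by nvme-cli): base64(key chars || CRC-32 of the
    # key chars, 4 bytes little-endian), prefixed with the digest id
    # (0 = null, 1 = sha256, 2 = sha384, 3 = sha512) and ':'-terminated.
    payload = hex_key.encode("ascii")
    crc = zlib.crc32(payload).to_bytes(4, "little")
    return "DHHC-1:%02x:%s:" % (digest_id, base64.b64encode(payload + crc).decode("ascii"))


# Mirrors "gen_dhchap_key sha256 32" from the trace: 16 random bytes become a
# 32-character hex key, paired with digest id 1 (sha256).
print(format_dhchap_key(os.urandom(16).hex(), 1))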
00:15:36.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.917 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ESJ 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ESJ 00:15:37.175 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ESJ 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.x0v ]] 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x0v 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x0v 00:15:37.433 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x0v 00:15:37.691 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:37.691 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cfi 00:15:37.691 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.691 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.691 09:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.691 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cfi 00:15:37.691 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cfi 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Wzk ]] 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wzk 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wzk 00:15:37.947 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wzk 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YaY 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YaY 00:15:38.204 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YaY 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.0Ik ]] 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Ik 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Ik 00:15:38.460 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Ik 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Wrg 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Wrg 00:15:38.718 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Wrg 00:15:38.975 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:38.975 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:38.975 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.975 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.975 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.975 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.233 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.491 00:15:39.491 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.491 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.491 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.749 { 00:15:39.749 "cntlid": 1, 00:15:39.749 "qid": 0, 00:15:39.749 "state": "enabled", 00:15:39.749 "thread": "nvmf_tgt_poll_group_000", 00:15:39.749 "listen_address": { 00:15:39.749 "trtype": "TCP", 00:15:39.749 "adrfam": "IPv4", 00:15:39.749 "traddr": "10.0.0.2", 00:15:39.749 "trsvcid": "4420" 00:15:39.749 }, 00:15:39.749 "peer_address": { 00:15:39.749 "trtype": "TCP", 00:15:39.749 "adrfam": "IPv4", 00:15:39.749 "traddr": "10.0.0.1", 00:15:39.749 "trsvcid": "40496" 00:15:39.749 }, 00:15:39.749 "auth": { 00:15:39.749 "state": "completed", 00:15:39.749 "digest": "sha256", 00:15:39.749 "dhgroup": "null" 00:15:39.749 } 00:15:39.749 } 00:15:39.749 ]' 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.749 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.006 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.006 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.006 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.263 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret 
DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:45.520 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.520 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.520 { 00:15:45.520 "cntlid": 3, 00:15:45.520 "qid": 0, 00:15:45.520 "state": "enabled", 00:15:45.520 "thread": "nvmf_tgt_poll_group_000", 00:15:45.520 "listen_address": { 00:15:45.520 "trtype": "TCP", 00:15:45.520 "adrfam": "IPv4", 00:15:45.520 "traddr": "10.0.0.2", 00:15:45.520 "trsvcid": "4420" 00:15:45.520 }, 00:15:45.520 "peer_address": { 00:15:45.520 "trtype": "TCP", 00:15:45.520 "adrfam": "IPv4", 00:15:45.520 "traddr": "10.0.0.1", 00:15:45.520 "trsvcid": "40522" 00:15:45.520 }, 00:15:45.520 "auth": { 00:15:45.520 "state": "completed", 00:15:45.520 "digest": "sha256", 00:15:45.520 "dhgroup": "null" 00:15:45.520 } 00:15:45.520 } 00:15:45.520 ]' 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.520 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.778 09:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.150 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.150 09:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.409 00:15:47.409 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.409 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.409 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.667 { 00:15:47.667 "cntlid": 5, 00:15:47.667 "qid": 0, 00:15:47.667 "state": "enabled", 00:15:47.667 "thread": "nvmf_tgt_poll_group_000", 00:15:47.667 "listen_address": { 00:15:47.667 "trtype": "TCP", 00:15:47.667 "adrfam": "IPv4", 00:15:47.667 "traddr": "10.0.0.2", 00:15:47.667 "trsvcid": "4420" 00:15:47.667 }, 00:15:47.667 "peer_address": { 00:15:47.667 "trtype": "TCP", 00:15:47.667 "adrfam": "IPv4", 00:15:47.667 "traddr": "10.0.0.1", 00:15:47.667 "trsvcid": "40544" 00:15:47.667 }, 00:15:47.667 "auth": { 00:15:47.667 "state": "completed", 00:15:47.667 "digest": "sha256", 00:15:47.667 "dhgroup": "null" 00:15:47.667 } 00:15:47.667 } 00:15:47.667 ]' 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.667 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.925 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.925 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.925 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.182 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.118 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.376 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.634 00:15:49.634 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.634 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.634 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.892 { 00:15:49.892 "cntlid": 7, 00:15:49.892 "qid": 0, 00:15:49.892 "state": "enabled", 00:15:49.892 "thread": "nvmf_tgt_poll_group_000", 00:15:49.892 "listen_address": { 00:15:49.892 "trtype": "TCP", 00:15:49.892 "adrfam": "IPv4", 00:15:49.892 "traddr": "10.0.0.2", 00:15:49.892 "trsvcid": "4420" 00:15:49.892 }, 00:15:49.892 "peer_address": { 00:15:49.892 "trtype": "TCP", 00:15:49.892 "adrfam": "IPv4", 00:15:49.892 "traddr": "10.0.0.1", 00:15:49.892 "trsvcid": "50918" 00:15:49.892 }, 00:15:49.892 "auth": { 00:15:49.892 "state": "completed", 00:15:49.892 "digest": "sha256", 00:15:49.892 "dhgroup": "null" 00:15:49.892 } 00:15:49.892 } 00:15:49.892 ]' 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.892 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.150 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.082 09:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.082 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.340 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.903 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.903 09:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.903 { 00:15:51.903 "cntlid": 9, 00:15:51.903 "qid": 0, 00:15:51.903 "state": "enabled", 00:15:51.903 "thread": "nvmf_tgt_poll_group_000", 00:15:51.903 "listen_address": { 00:15:51.903 "trtype": "TCP", 00:15:51.903 "adrfam": "IPv4", 00:15:51.903 "traddr": "10.0.0.2", 00:15:51.903 "trsvcid": "4420" 00:15:51.903 }, 00:15:51.903 "peer_address": { 00:15:51.903 "trtype": "TCP", 00:15:51.903 "adrfam": "IPv4", 00:15:51.903 "traddr": "10.0.0.1", 00:15:51.903 "trsvcid": "50940" 00:15:51.903 }, 00:15:51.903 "auth": { 00:15:51.903 "state": "completed", 00:15:51.903 "digest": "sha256", 00:15:51.903 "dhgroup": "ffdhe2048" 00:15:51.903 } 00:15:51.903 } 00:15:51.903 ]' 00:15:51.903 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.159 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.416 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.348 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.605 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:53.605 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.605 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.605 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.605 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.605 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.606 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.606 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.606 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.606 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.606 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.606 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.863 00:15:53.863 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.863 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.863 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.120 { 00:15:54.120 "cntlid": 11, 00:15:54.120 "qid": 0, 00:15:54.120 "state": "enabled", 00:15:54.120 "thread": "nvmf_tgt_poll_group_000", 00:15:54.120 "listen_address": { 
00:15:54.120 "trtype": "TCP", 00:15:54.120 "adrfam": "IPv4", 00:15:54.120 "traddr": "10.0.0.2", 00:15:54.120 "trsvcid": "4420" 00:15:54.120 }, 00:15:54.120 "peer_address": { 00:15:54.120 "trtype": "TCP", 00:15:54.120 "adrfam": "IPv4", 00:15:54.120 "traddr": "10.0.0.1", 00:15:54.120 "trsvcid": "50970" 00:15:54.120 }, 00:15:54.120 "auth": { 00:15:54.120 "state": "completed", 00:15:54.120 "digest": "sha256", 00:15:54.120 "dhgroup": "ffdhe2048" 00:15:54.120 } 00:15:54.120 } 00:15:54.120 ]' 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.120 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.377 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.377 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.377 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.635 09:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.566 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.823 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.080 00:15:56.080 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.080 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.080 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.337 { 00:15:56.337 "cntlid": 13, 00:15:56.337 "qid": 0, 00:15:56.337 "state": "enabled", 00:15:56.337 "thread": "nvmf_tgt_poll_group_000", 00:15:56.337 "listen_address": { 00:15:56.337 "trtype": "TCP", 00:15:56.337 "adrfam": "IPv4", 00:15:56.337 "traddr": "10.0.0.2", 00:15:56.337 "trsvcid": "4420" 00:15:56.337 }, 00:15:56.337 "peer_address": { 00:15:56.337 "trtype": "TCP", 00:15:56.337 "adrfam": "IPv4", 00:15:56.337 "traddr": "10.0.0.1", 00:15:56.337 "trsvcid": "51006" 00:15:56.337 }, 00:15:56.337 "auth": { 00:15:56.337 
"state": "completed", 00:15:56.337 "digest": "sha256", 00:15:56.337 "dhgroup": "ffdhe2048" 00:15:56.337 } 00:15:56.337 } 00:15:56.337 ]' 00:15:56.337 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.337 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.337 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.337 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.337 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.594 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.594 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.594 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.851 09:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.783 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.040 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.041 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.298 00:15:58.298 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.298 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.298 09:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.555 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.555 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.555 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.555 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.555 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.555 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.555 { 00:15:58.555 "cntlid": 15, 00:15:58.555 "qid": 0, 00:15:58.555 "state": "enabled", 00:15:58.555 "thread": "nvmf_tgt_poll_group_000", 00:15:58.555 "listen_address": { 00:15:58.555 "trtype": "TCP", 00:15:58.555 "adrfam": "IPv4", 00:15:58.555 "traddr": "10.0.0.2", 00:15:58.555 "trsvcid": "4420" 00:15:58.555 }, 00:15:58.555 "peer_address": { 00:15:58.555 "trtype": "TCP", 00:15:58.555 "adrfam": "IPv4", 00:15:58.555 "traddr": "10.0.0.1", 00:15:58.555 "trsvcid": "36954" 00:15:58.555 }, 00:15:58.555 "auth": { 00:15:58.555 "state": "completed", 00:15:58.556 "digest": "sha256", 00:15:58.556 "dhgroup": "ffdhe2048" 00:15:58.556 } 00:15:58.556 } 00:15:58.556 ]' 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.556 09:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.556 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.813 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:15:59.745 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.745 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:59.745 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.745 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.002 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.002 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.002 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.002 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.002 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.260 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.518 00:16:00.518 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.518 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.518 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.784 { 00:16:00.784 "cntlid": 17, 00:16:00.784 "qid": 0, 00:16:00.784 "state": "enabled", 00:16:00.784 "thread": "nvmf_tgt_poll_group_000", 00:16:00.784 "listen_address": { 00:16:00.784 "trtype": "TCP", 00:16:00.784 "adrfam": "IPv4", 00:16:00.784 "traddr": "10.0.0.2", 00:16:00.784 "trsvcid": "4420" 00:16:00.784 }, 00:16:00.784 "peer_address": { 00:16:00.784 "trtype": "TCP", 00:16:00.784 "adrfam": "IPv4", 00:16:00.784 "traddr": "10.0.0.1", 00:16:00.784 "trsvcid": "36982" 00:16:00.784 }, 00:16:00.784 "auth": { 00:16:00.784 "state": "completed", 00:16:00.784 "digest": "sha256", 00:16:00.784 "dhgroup": "ffdhe3072" 00:16:00.784 } 00:16:00.784 } 00:16:00.784 ]' 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.784 09:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.784 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.042 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.415 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.415 09:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.415 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.673 00:16:02.673 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.673 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.673 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.930 { 00:16:02.930 "cntlid": 19, 00:16:02.930 "qid": 0, 00:16:02.930 "state": "enabled", 00:16:02.930 "thread": "nvmf_tgt_poll_group_000", 00:16:02.930 "listen_address": { 00:16:02.930 "trtype": "TCP", 00:16:02.930 "adrfam": "IPv4", 00:16:02.930 "traddr": "10.0.0.2", 00:16:02.930 "trsvcid": "4420" 00:16:02.930 }, 00:16:02.930 "peer_address": { 00:16:02.930 "trtype": "TCP", 00:16:02.930 "adrfam": "IPv4", 00:16:02.930 "traddr": "10.0.0.1", 00:16:02.930 "trsvcid": "36998" 00:16:02.930 }, 00:16:02.930 "auth": { 00:16:02.930 "state": "completed", 00:16:02.930 "digest": "sha256", 00:16:02.930 "dhgroup": "ffdhe3072" 00:16:02.930 } 00:16:02.930 } 00:16:02.930 ]' 00:16:02.930 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.188 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.188 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.188 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.188 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.188 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.188 09:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.188 09:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.446 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.380 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.643 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.956 00:16:04.956 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.956 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.956 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.248 { 00:16:05.248 "cntlid": 21, 00:16:05.248 "qid": 0, 00:16:05.248 "state": "enabled", 00:16:05.248 "thread": "nvmf_tgt_poll_group_000", 00:16:05.248 "listen_address": { 00:16:05.248 "trtype": "TCP", 00:16:05.248 "adrfam": "IPv4", 00:16:05.248 "traddr": "10.0.0.2", 00:16:05.248 "trsvcid": "4420" 00:16:05.248 }, 00:16:05.248 "peer_address": { 00:16:05.248 "trtype": "TCP", 00:16:05.248 "adrfam": "IPv4", 00:16:05.248 "traddr": "10.0.0.1", 00:16:05.248 "trsvcid": "37010" 00:16:05.248 }, 00:16:05.248 "auth": { 00:16:05.248 "state": "completed", 00:16:05.248 "digest": "sha256", 00:16:05.248 "dhgroup": "ffdhe3072" 00:16:05.248 } 00:16:05.248 } 00:16:05.248 ]' 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.248 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.531 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.531 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.531 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.531 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.531 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.788 
09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:06.720 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.978 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.236 00:16:07.236 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.236 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.236 09:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.800 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.801 { 00:16:07.801 "cntlid": 23, 00:16:07.801 "qid": 0, 00:16:07.801 "state": "enabled", 00:16:07.801 "thread": "nvmf_tgt_poll_group_000", 00:16:07.801 "listen_address": { 00:16:07.801 "trtype": "TCP", 00:16:07.801 "adrfam": "IPv4", 00:16:07.801 "traddr": "10.0.0.2", 00:16:07.801 "trsvcid": "4420" 00:16:07.801 }, 00:16:07.801 "peer_address": { 00:16:07.801 "trtype": "TCP", 00:16:07.801 "adrfam": "IPv4", 00:16:07.801 "traddr": "10.0.0.1", 00:16:07.801 "trsvcid": "37050" 00:16:07.801 }, 00:16:07.801 "auth": { 00:16:07.801 "state": "completed", 00:16:07.801 "digest": "sha256", 00:16:07.801 "dhgroup": "ffdhe3072" 00:16:07.801 } 00:16:07.801 } 00:16:07.801 ]' 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.801 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.059 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:16:08.991 09:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.991 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.249 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.815 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.815 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.073 { 00:16:10.073 "cntlid": 25, 00:16:10.073 "qid": 0, 00:16:10.073 "state": "enabled", 00:16:10.073 "thread": "nvmf_tgt_poll_group_000", 00:16:10.073 "listen_address": { 00:16:10.073 "trtype": "TCP", 00:16:10.073 "adrfam": "IPv4", 00:16:10.073 "traddr": "10.0.0.2", 00:16:10.073 "trsvcid": "4420" 00:16:10.073 }, 00:16:10.073 "peer_address": { 00:16:10.073 "trtype": "TCP", 00:16:10.073 "adrfam": "IPv4", 00:16:10.073 "traddr": "10.0.0.1", 00:16:10.073 "trsvcid": "54998" 00:16:10.073 }, 00:16:10.073 "auth": { 00:16:10.073 "state": "completed", 00:16:10.073 "digest": "sha256", 00:16:10.073 "dhgroup": "ffdhe4096" 00:16:10.073 } 00:16:10.073 } 00:16:10.073 ]' 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.073 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.331 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.261 09:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.519 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.084 00:16:12.084 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.084 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.084 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.342 { 00:16:12.342 "cntlid": 27, 00:16:12.342 "qid": 0, 00:16:12.342 "state": "enabled", 00:16:12.342 "thread": "nvmf_tgt_poll_group_000", 00:16:12.342 "listen_address": { 00:16:12.342 "trtype": "TCP", 00:16:12.342 "adrfam": "IPv4", 00:16:12.342 "traddr": "10.0.0.2", 00:16:12.342 "trsvcid": "4420" 00:16:12.342 }, 00:16:12.342 "peer_address": { 00:16:12.342 "trtype": "TCP", 00:16:12.342 "adrfam": "IPv4", 00:16:12.342 "traddr": "10.0.0.1", 00:16:12.342 "trsvcid": "55020" 00:16:12.342 }, 00:16:12.342 "auth": { 00:16:12.342 "state": "completed", 00:16:12.342 "digest": "sha256", 00:16:12.342 "dhgroup": "ffdhe4096" 00:16:12.342 } 00:16:12.342 } 00:16:12.342 ]' 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.342 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.600 09:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.533 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.791 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.356 00:16:14.356 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.356 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.356 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.614 { 00:16:14.614 "cntlid": 29, 00:16:14.614 "qid": 0, 00:16:14.614 "state": "enabled", 00:16:14.614 "thread": "nvmf_tgt_poll_group_000", 00:16:14.614 "listen_address": { 00:16:14.614 "trtype": "TCP", 00:16:14.614 "adrfam": "IPv4", 00:16:14.614 "traddr": "10.0.0.2", 00:16:14.614 "trsvcid": "4420" 00:16:14.614 }, 00:16:14.614 "peer_address": { 00:16:14.614 "trtype": "TCP", 00:16:14.614 "adrfam": "IPv4", 00:16:14.614 "traddr": "10.0.0.1", 00:16:14.614 "trsvcid": "55056" 00:16:14.614 }, 00:16:14.614 "auth": { 00:16:14.614 "state": "completed", 00:16:14.614 "digest": "sha256", 00:16:14.614 "dhgroup": "ffdhe4096" 00:16:14.614 } 00:16:14.614 } 00:16:14.614 ]' 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.614 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.871 09:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.804 09:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.804 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.370 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.628 00:16:16.628 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.628 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.628 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.886 { 00:16:16.886 "cntlid": 31, 00:16:16.886 "qid": 0, 00:16:16.886 "state": "enabled", 00:16:16.886 "thread": "nvmf_tgt_poll_group_000", 00:16:16.886 "listen_address": { 00:16:16.886 "trtype": "TCP", 00:16:16.886 "adrfam": "IPv4", 00:16:16.886 "traddr": "10.0.0.2", 00:16:16.886 "trsvcid": "4420" 00:16:16.886 }, 00:16:16.886 "peer_address": { 00:16:16.886 "trtype": "TCP", 00:16:16.886 "adrfam": "IPv4", 00:16:16.886 "traddr": "10.0.0.1", 00:16:16.886 "trsvcid": "55080" 00:16:16.886 }, 00:16:16.886 "auth": { 00:16:16.886 "state": "completed", 00:16:16.886 "digest": "sha256", 00:16:16.886 "dhgroup": "ffdhe4096" 00:16:16.886 } 00:16:16.886 } 00:16:16.886 ]' 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.886 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.451 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.383 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.641 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.206 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.206 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.464 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.464 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.464 { 00:16:19.464 "cntlid": 33, 00:16:19.464 "qid": 0, 00:16:19.464 "state": "enabled", 00:16:19.464 "thread": "nvmf_tgt_poll_group_000", 00:16:19.464 "listen_address": { 
00:16:19.464 "trtype": "TCP", 00:16:19.464 "adrfam": "IPv4", 00:16:19.464 "traddr": "10.0.0.2", 00:16:19.464 "trsvcid": "4420" 00:16:19.464 }, 00:16:19.465 "peer_address": { 00:16:19.465 "trtype": "TCP", 00:16:19.465 "adrfam": "IPv4", 00:16:19.465 "traddr": "10.0.0.1", 00:16:19.465 "trsvcid": "34312" 00:16:19.465 }, 00:16:19.465 "auth": { 00:16:19.465 "state": "completed", 00:16:19.465 "digest": "sha256", 00:16:19.465 "dhgroup": "ffdhe6144" 00:16:19.465 } 00:16:19.465 } 00:16:19.465 ]' 00:16:19.465 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.465 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.465 09:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.465 09:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.465 09:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.465 09:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.465 09:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.465 09:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.722 09:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.655 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:20.912 09:30:53 
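The qpairs dumps above are what the checks at auth.sh@44 through auth.sh@48 consume. Outside the harness the same verification looks roughly like this; the subsystem NQN and the expected sha256/ffdhe6144/completed values are taken from the trace, the rest is a sketch.

# host side: the attach must have produced exactly the nvme0 controller
[[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# target side: the accepted qpair must report the negotiated digest and DH group,
# and the DH-HMAC-CHAP transaction must have completed
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]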
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.912 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.477 00:16:21.477 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.477 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.477 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.735 { 00:16:21.735 "cntlid": 35, 00:16:21.735 "qid": 0, 00:16:21.735 "state": "enabled", 00:16:21.735 "thread": "nvmf_tgt_poll_group_000", 00:16:21.735 "listen_address": { 00:16:21.735 "trtype": "TCP", 00:16:21.735 "adrfam": "IPv4", 00:16:21.735 "traddr": "10.0.0.2", 00:16:21.735 "trsvcid": "4420" 00:16:21.735 }, 00:16:21.735 "peer_address": { 00:16:21.735 "trtype": "TCP", 00:16:21.735 "adrfam": "IPv4", 00:16:21.735 "traddr": "10.0.0.1", 00:16:21.735 "trsvcid": "34340" 00:16:21.735 
}, 00:16:21.735 "auth": { 00:16:21.735 "state": "completed", 00:16:21.735 "digest": "sha256", 00:16:21.735 "dhgroup": "ffdhe6144" 00:16:21.735 } 00:16:21.735 } 00:16:21.735 ]' 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.735 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.993 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.993 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.993 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.993 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.993 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.251 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.181 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.437 09:30:55 
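After the bdev_nvme controller is detached, auth.sh@52 and auth.sh@55 repeat the same handshake through the kernel initiator, passing the secrets directly on the nvme-cli command line. A sketch with placeholder secrets; the real DHHC-1:<id>:<base64>: strings are generated earlier in the run and appear verbatim in the trace.

# kernel-initiator round trip: --dhchap-secret is the host key, --dhchap-ctrl-secret the
# controller key for bidirectional authentication (omitted in the key3 iterations)
nvme connect -t tcp -a 10.0.0.2 -i 1 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
    --hostid 8b464f06-2980-e311-ba20-001e67a94acd \
    --dhchap-secret 'DHHC-1:01:<host-key-placeholder>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-key-placeholder>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0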
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.437 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.438 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.002 00:16:24.002 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.002 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.002 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.259 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.259 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.259 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.259 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.259 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.259 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.260 { 00:16:24.260 "cntlid": 37, 00:16:24.260 "qid": 0, 00:16:24.260 "state": "enabled", 00:16:24.260 "thread": "nvmf_tgt_poll_group_000", 00:16:24.260 "listen_address": { 00:16:24.260 "trtype": "TCP", 00:16:24.260 "adrfam": "IPv4", 00:16:24.260 "traddr": "10.0.0.2", 00:16:24.260 "trsvcid": "4420" 00:16:24.260 }, 00:16:24.260 "peer_address": { 00:16:24.260 "trtype": "TCP", 00:16:24.260 "adrfam": "IPv4", 00:16:24.260 "traddr": "10.0.0.1", 00:16:24.260 "trsvcid": "34370" 00:16:24.260 }, 00:16:24.260 "auth": { 00:16:24.260 "state": "completed", 00:16:24.260 "digest": "sha256", 00:16:24.260 "dhgroup": "ffdhe6144" 00:16:24.260 } 00:16:24.260 } 00:16:24.260 ]' 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.260 09:30:56 
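Worth keeping in mind while reading the trace: two SPDK applications are involved. The nvmf_* calls issued through rpc_cmd configure the target, while every hostrpc call is routed to a second, initiator-side application via -s /var/tmp/host.sock, exactly as the expansion at auth.sh@31 shows. A sketch of the two wrappers; the target is assumed to listen on SPDK's default /var/tmp/spdk.sock, which the trace never prints.

# initiator-side RPCs go to the host application's socket
hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }
# target-side RPCs are assumed to use the default socket (/var/tmp/spdk.sock)
rpc_cmd() { scripts/rpc.py "$@"; }

hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # -> nvme0 while attached
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0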
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.260 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.517 09:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:25.450 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.015 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:26.015 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.015 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.016 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.580 00:16:26.580 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.580 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.580 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.580 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.580 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.580 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.836 { 00:16:26.836 "cntlid": 39, 00:16:26.836 "qid": 0, 00:16:26.836 "state": "enabled", 00:16:26.836 "thread": "nvmf_tgt_poll_group_000", 00:16:26.836 "listen_address": { 00:16:26.836 "trtype": "TCP", 00:16:26.836 "adrfam": "IPv4", 00:16:26.836 "traddr": "10.0.0.2", 00:16:26.836 "trsvcid": "4420" 00:16:26.836 }, 00:16:26.836 "peer_address": { 00:16:26.836 "trtype": "TCP", 00:16:26.836 "adrfam": "IPv4", 00:16:26.836 "traddr": "10.0.0.1", 00:16:26.836 "trsvcid": "34402" 00:16:26.836 }, 00:16:26.836 "auth": { 00:16:26.836 "state": "completed", 00:16:26.836 "digest": "sha256", 00:16:26.836 "dhgroup": "ffdhe6144" 00:16:26.836 } 00:16:26.836 } 00:16:26.836 ]' 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.836 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.093 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:28.025 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.284 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.216 00:16:29.216 09:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.216 09:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.216 09:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.474 { 00:16:29.474 "cntlid": 41, 00:16:29.474 "qid": 0, 00:16:29.474 "state": "enabled", 00:16:29.474 "thread": "nvmf_tgt_poll_group_000", 00:16:29.474 "listen_address": { 00:16:29.474 "trtype": "TCP", 00:16:29.474 "adrfam": "IPv4", 00:16:29.474 "traddr": "10.0.0.2", 00:16:29.474 "trsvcid": "4420" 00:16:29.474 }, 00:16:29.474 "peer_address": { 00:16:29.474 "trtype": "TCP", 00:16:29.474 "adrfam": "IPv4", 00:16:29.474 "traddr": "10.0.0.1", 00:16:29.474 "trsvcid": "51650" 00:16:29.474 }, 00:16:29.474 "auth": { 00:16:29.474 "state": "completed", 00:16:29.474 "digest": "sha256", 00:16:29.474 "dhgroup": "ffdhe8192" 00:16:29.474 } 00:16:29.474 } 00:16:29.474 ]' 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.474 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.732 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.732 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.732 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.732 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:29.732 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.990 09:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:30.922 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.179 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.112 00:16:32.112 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.112 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.112 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.370 { 00:16:32.370 "cntlid": 43, 00:16:32.370 "qid": 0, 00:16:32.370 "state": "enabled", 00:16:32.370 "thread": "nvmf_tgt_poll_group_000", 00:16:32.370 "listen_address": { 00:16:32.370 "trtype": "TCP", 00:16:32.370 "adrfam": "IPv4", 00:16:32.370 "traddr": "10.0.0.2", 00:16:32.370 "trsvcid": "4420" 00:16:32.370 }, 00:16:32.370 "peer_address": { 00:16:32.370 "trtype": "TCP", 00:16:32.370 "adrfam": "IPv4", 00:16:32.370 "traddr": "10.0.0.1", 00:16:32.370 "trsvcid": "51690" 00:16:32.370 }, 00:16:32.370 "auth": { 00:16:32.370 "state": "completed", 00:16:32.370 "digest": "sha256", 00:16:32.370 "dhgroup": "ffdhe8192" 00:16:32.370 } 00:16:32.370 } 00:16:32.370 ]' 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.370 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.370 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.370 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.370 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.370 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.370 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.628 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:16:33.559 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.559 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:33.559 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.560 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.560 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.560 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.560 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.560 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.123 09:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.053 00:16:35.053 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.053 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.053 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.053 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.054 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.054 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.054 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.054 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.054 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.054 { 00:16:35.054 "cntlid": 45, 00:16:35.054 "qid": 0, 00:16:35.054 "state": "enabled", 00:16:35.054 "thread": "nvmf_tgt_poll_group_000", 00:16:35.054 "listen_address": { 00:16:35.054 "trtype": "TCP", 00:16:35.054 "adrfam": "IPv4", 00:16:35.054 "traddr": "10.0.0.2", 00:16:35.054 "trsvcid": "4420" 00:16:35.054 }, 00:16:35.054 "peer_address": { 00:16:35.054 "trtype": "TCP", 00:16:35.054 "adrfam": "IPv4", 00:16:35.054 "traddr": "10.0.0.1", 00:16:35.054 "trsvcid": "51706" 00:16:35.054 }, 00:16:35.054 "auth": { 00:16:35.054 "state": "completed", 00:16:35.054 "digest": "sha256", 00:16:35.054 "dhgroup": "ffdhe8192" 00:16:35.054 } 00:16:35.054 } 00:16:35.054 ]' 00:16:35.054 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.310 09:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.568 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret 
DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.501 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.759 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.698 00:16:37.698 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.698 09:31:10 
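The same add-host/attach/verify/teardown cycle keeps repeating: the loop at auth.sh@93 walks key0 through key3, the loop at auth.sh@92 advances the DH group (ffdhe4096, ffdhe6144 and ffdhe8192 are visible above), and at auth.sh@91 the digest switches to sha384 a little further below. In outline, with the value lists assumed only from the combinations this run is seen to exercise:

# outline of the sweep driving this stretch of the log
for digest in sha256 sha384; do                          # later digests, if any, fall outside this excerpt
    for dhgroup in null ffdhe4096 ffdhe6144 ffdhe8192; do  # only the groups visible here
        for keyid in 0 1 2 3; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done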
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.698 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.955 { 00:16:37.955 "cntlid": 47, 00:16:37.955 "qid": 0, 00:16:37.955 "state": "enabled", 00:16:37.955 "thread": "nvmf_tgt_poll_group_000", 00:16:37.955 "listen_address": { 00:16:37.955 "trtype": "TCP", 00:16:37.955 "adrfam": "IPv4", 00:16:37.955 "traddr": "10.0.0.2", 00:16:37.955 "trsvcid": "4420" 00:16:37.955 }, 00:16:37.955 "peer_address": { 00:16:37.955 "trtype": "TCP", 00:16:37.955 "adrfam": "IPv4", 00:16:37.955 "traddr": "10.0.0.1", 00:16:37.955 "trsvcid": "51736" 00:16:37.955 }, 00:16:37.955 "auth": { 00:16:37.955 "state": "completed", 00:16:37.955 "digest": "sha256", 00:16:37.955 "dhgroup": "ffdhe8192" 00:16:37.955 } 00:16:37.955 } 00:16:37.955 ]' 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.955 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.212 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.212 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.212 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.470 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.403 09:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.661 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.918 00:16:39.918 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.918 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.918 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.176 { 00:16:40.176 "cntlid": 49, 00:16:40.176 "qid": 0, 00:16:40.176 "state": "enabled", 00:16:40.176 "thread": "nvmf_tgt_poll_group_000", 00:16:40.176 "listen_address": { 00:16:40.176 "trtype": "TCP", 00:16:40.176 "adrfam": "IPv4", 00:16:40.176 "traddr": "10.0.0.2", 00:16:40.176 "trsvcid": "4420" 00:16:40.176 }, 00:16:40.176 "peer_address": { 00:16:40.176 "trtype": "TCP", 00:16:40.176 "adrfam": "IPv4", 00:16:40.176 "traddr": "10.0.0.1", 00:16:40.176 "trsvcid": "47180" 00:16:40.176 }, 00:16:40.176 "auth": { 00:16:40.176 "state": "completed", 00:16:40.176 "digest": "sha384", 00:16:40.176 "dhgroup": "null" 00:16:40.176 } 00:16:40.176 } 00:16:40.176 ]' 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.176 09:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.434 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:41.365 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.365 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:41.365 09:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.365 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.366 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.366 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.366 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.366 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.931 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.189 00:16:42.189 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.189 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.189 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.447 { 00:16:42.447 "cntlid": 51, 00:16:42.447 "qid": 0, 00:16:42.447 "state": "enabled", 00:16:42.447 "thread": "nvmf_tgt_poll_group_000", 00:16:42.447 "listen_address": { 00:16:42.447 "trtype": "TCP", 00:16:42.447 "adrfam": "IPv4", 00:16:42.447 "traddr": "10.0.0.2", 00:16:42.447 "trsvcid": "4420" 00:16:42.447 }, 00:16:42.447 "peer_address": { 00:16:42.447 "trtype": "TCP", 00:16:42.447 "adrfam": "IPv4", 00:16:42.447 "traddr": "10.0.0.1", 00:16:42.447 "trsvcid": "47194" 00:16:42.447 }, 00:16:42.447 "auth": { 00:16:42.447 "state": "completed", 00:16:42.447 "digest": "sha384", 00:16:42.447 "dhgroup": "null" 00:16:42.447 } 00:16:42.447 } 00:16:42.447 ]' 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.447 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.447 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:42.447 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.447 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.447 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.447 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.704 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.638 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.896 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.154 00:16:44.154 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.154 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.154 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.412 { 00:16:44.412 "cntlid": 53, 00:16:44.412 "qid": 0, 00:16:44.412 "state": "enabled", 00:16:44.412 "thread": "nvmf_tgt_poll_group_000", 00:16:44.412 "listen_address": { 00:16:44.412 "trtype": "TCP", 00:16:44.412 "adrfam": "IPv4", 00:16:44.412 "traddr": "10.0.0.2", 00:16:44.412 "trsvcid": "4420" 00:16:44.412 }, 00:16:44.412 "peer_address": { 00:16:44.412 "trtype": "TCP", 00:16:44.412 "adrfam": "IPv4", 00:16:44.412 "traddr": "10.0.0.1", 00:16:44.412 "trsvcid": "47216" 00:16:44.412 }, 00:16:44.412 "auth": { 00:16:44.412 "state": "completed", 00:16:44.412 "digest": "sha384", 00:16:44.412 "dhgroup": "null" 00:16:44.412 } 00:16:44.412 } 00:16:44.412 ]' 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.412 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.669 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:44.669 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.670 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.670 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.670 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.927 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:45.860 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.118 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.376 00:16:46.376 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.376 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.376 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.634 { 00:16:46.634 "cntlid": 55, 00:16:46.634 "qid": 0, 00:16:46.634 "state": "enabled", 00:16:46.634 "thread": "nvmf_tgt_poll_group_000", 00:16:46.634 "listen_address": { 00:16:46.634 "trtype": "TCP", 00:16:46.634 "adrfam": "IPv4", 00:16:46.634 "traddr": "10.0.0.2", 00:16:46.634 "trsvcid": "4420" 00:16:46.634 }, 00:16:46.634 "peer_address": { 
00:16:46.634 "trtype": "TCP", 00:16:46.634 "adrfam": "IPv4", 00:16:46.634 "traddr": "10.0.0.1", 00:16:46.634 "trsvcid": "47230" 00:16:46.634 }, 00:16:46.634 "auth": { 00:16:46.634 "state": "completed", 00:16:46.634 "digest": "sha384", 00:16:46.634 "dhgroup": "null" 00:16:46.634 } 00:16:46.634 } 00:16:46.634 ]' 00:16:46.634 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.892 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.150 09:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.082 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.339 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.597 00:16:48.854 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.854 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.854 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.111 { 00:16:49.111 "cntlid": 57, 00:16:49.111 "qid": 0, 00:16:49.111 "state": "enabled", 00:16:49.111 "thread": "nvmf_tgt_poll_group_000", 00:16:49.111 "listen_address": { 00:16:49.111 "trtype": "TCP", 00:16:49.111 "adrfam": "IPv4", 00:16:49.111 "traddr": "10.0.0.2", 00:16:49.111 "trsvcid": "4420" 00:16:49.111 }, 00:16:49.111 "peer_address": { 00:16:49.111 "trtype": "TCP", 00:16:49.111 "adrfam": "IPv4", 00:16:49.111 "traddr": "10.0.0.1", 00:16:49.111 "trsvcid": "36678" 00:16:49.111 }, 00:16:49.111 "auth": { 00:16:49.111 "state": "completed", 00:16:49.111 "digest": "sha384", 00:16:49.111 "dhgroup": "ffdhe2048" 00:16:49.111 } 00:16:49.111 } 00:16:49.111 ]' 
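The loop traced above repeats one fixed pattern per digest / dhgroup / key combination: restrict the host initiator's DH-HMAC-CHAP options, allow the host on the target subsystem with the key pair under test, attach a controller over the host RPC socket, confirm the target-side qpair reports a completed authentication with the expected digest and dhgroup, then tear down and re-check the same pair through the kernel initiator with nvme-cli. The sketch below condenses one such iteration (sha384 / ffdhe2048 / key0) from the commands visible in this trace; it is a minimal reconstruction rather than the test script itself. The target application's RPC socket is not shown in this excerpt, so the default socket is assumed; the key names key0/ckey0 refer to keyfiles registered earlier in the run, and the DHHC-1 secrets are placeholders for the values printed in the surrounding log lines.

#!/usr/bin/env bash
set -e
# Host-side RPC socket used throughout this run; target socket assumed default (not shown in this excerpt).
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# Restrict the host initiator to the digest/dhgroup under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Allow the host on the target subsystem with the key pair under test.
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side; this is where the DH-HMAC-CHAP exchange actually runs.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller exists and the target-side qpair shows a completed auth transaction.
[[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
$HOSTRPC bdev_nvme_detach_controller nvme0

# Repeat the same authentication through the kernel initiator, then tear down.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
  --dhchap-secret 'DHHC-1:00:<host secret placeholder>:' \
  --dhchap-ctrl-secret 'DHHC-1:03:<controller secret placeholder>:'
nvme disconnect -n "$SUBNQN"
$TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"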
00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.111 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.369 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:50.302 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.303 09:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.560 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.124 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.124 { 00:16:51.124 "cntlid": 59, 00:16:51.124 "qid": 0, 00:16:51.124 "state": "enabled", 00:16:51.124 "thread": "nvmf_tgt_poll_group_000", 00:16:51.124 "listen_address": { 00:16:51.124 "trtype": "TCP", 00:16:51.124 "adrfam": "IPv4", 00:16:51.124 "traddr": "10.0.0.2", 00:16:51.124 "trsvcid": "4420" 00:16:51.124 }, 00:16:51.124 "peer_address": { 00:16:51.124 "trtype": "TCP", 00:16:51.124 "adrfam": "IPv4", 00:16:51.124 "traddr": "10.0.0.1", 00:16:51.124 "trsvcid": "36696" 00:16:51.124 }, 00:16:51.124 "auth": { 00:16:51.124 "state": "completed", 00:16:51.124 "digest": "sha384", 00:16:51.124 "dhgroup": "ffdhe2048" 00:16:51.124 } 00:16:51.124 } 00:16:51.124 ]' 00:16:51.124 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.381 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.639 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.571 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.828 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.828 
09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.829 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.829 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.829 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.829 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.086 00:16:53.086 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.086 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.086 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.344 { 00:16:53.344 "cntlid": 61, 00:16:53.344 "qid": 0, 00:16:53.344 "state": "enabled", 00:16:53.344 "thread": "nvmf_tgt_poll_group_000", 00:16:53.344 "listen_address": { 00:16:53.344 "trtype": "TCP", 00:16:53.344 "adrfam": "IPv4", 00:16:53.344 "traddr": "10.0.0.2", 00:16:53.344 "trsvcid": "4420" 00:16:53.344 }, 00:16:53.344 "peer_address": { 00:16:53.344 "trtype": "TCP", 00:16:53.344 "adrfam": "IPv4", 00:16:53.344 "traddr": "10.0.0.1", 00:16:53.344 "trsvcid": "36730" 00:16:53.344 }, 00:16:53.344 "auth": { 00:16:53.344 "state": "completed", 00:16:53.344 "digest": "sha384", 00:16:53.344 "dhgroup": "ffdhe2048" 00:16:53.344 } 00:16:53.344 } 00:16:53.344 ]' 00:16:53.344 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.344 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.344 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.344 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.344 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.601 09:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.601 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.601 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.601 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.974 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.975 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.975 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:54.975 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.975 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.975 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.975 
09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.975 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.233 00:16:55.491 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.491 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.491 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.491 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.749 { 00:16:55.749 "cntlid": 63, 00:16:55.749 "qid": 0, 00:16:55.749 "state": "enabled", 00:16:55.749 "thread": "nvmf_tgt_poll_group_000", 00:16:55.749 "listen_address": { 00:16:55.749 "trtype": "TCP", 00:16:55.749 "adrfam": "IPv4", 00:16:55.749 "traddr": "10.0.0.2", 00:16:55.749 "trsvcid": "4420" 00:16:55.749 }, 00:16:55.749 "peer_address": { 00:16:55.749 "trtype": "TCP", 00:16:55.749 "adrfam": "IPv4", 00:16:55.749 "traddr": "10.0.0.1", 00:16:55.749 "trsvcid": "36746" 00:16:55.749 }, 00:16:55.749 "auth": { 00:16:55.749 "state": "completed", 00:16:55.749 "digest": "sha384", 00:16:55.749 "dhgroup": "ffdhe2048" 00:16:55.749 } 00:16:55.749 } 00:16:55.749 ]' 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.749 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:56.007 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.941 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.199 09:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.199 09:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.764 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.764 { 00:16:57.764 "cntlid": 65, 00:16:57.764 "qid": 0, 00:16:57.764 "state": "enabled", 00:16:57.764 "thread": "nvmf_tgt_poll_group_000", 00:16:57.764 "listen_address": { 00:16:57.764 "trtype": "TCP", 00:16:57.764 "adrfam": "IPv4", 00:16:57.764 "traddr": "10.0.0.2", 00:16:57.764 "trsvcid": "4420" 00:16:57.764 }, 00:16:57.764 "peer_address": { 00:16:57.764 "trtype": "TCP", 00:16:57.764 "adrfam": "IPv4", 00:16:57.764 "traddr": "10.0.0.1", 00:16:57.764 "trsvcid": "36778" 00:16:57.764 }, 00:16:57.764 "auth": { 00:16:57.764 "state": "completed", 00:16:57.764 "digest": "sha384", 00:16:57.764 "dhgroup": "ffdhe3072" 00:16:57.764 } 00:16:57.764 } 00:16:57.764 ]' 00:16:57.764 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.021 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.278 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 
8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.210 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.468 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.034 00:17:00.034 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.034 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.034 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.291 { 00:17:00.291 "cntlid": 67, 00:17:00.291 "qid": 0, 00:17:00.291 "state": "enabled", 00:17:00.291 "thread": "nvmf_tgt_poll_group_000", 00:17:00.291 "listen_address": { 00:17:00.291 "trtype": "TCP", 00:17:00.291 "adrfam": "IPv4", 00:17:00.291 "traddr": "10.0.0.2", 00:17:00.291 "trsvcid": "4420" 00:17:00.291 }, 00:17:00.291 "peer_address": { 00:17:00.291 "trtype": "TCP", 00:17:00.291 "adrfam": "IPv4", 00:17:00.291 "traddr": "10.0.0.1", 00:17:00.291 "trsvcid": "55610" 00:17:00.291 }, 00:17:00.291 "auth": { 00:17:00.291 "state": "completed", 00:17:00.291 "digest": "sha384", 00:17:00.291 "dhgroup": "ffdhe3072" 00:17:00.291 } 00:17:00.291 } 00:17:00.291 ]' 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.291 09:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.549 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:01.483 09:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.484 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.049 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.307 00:17:02.307 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.307 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.307 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.565 { 00:17:02.565 "cntlid": 69, 00:17:02.565 "qid": 0, 00:17:02.565 "state": "enabled", 00:17:02.565 "thread": "nvmf_tgt_poll_group_000", 00:17:02.565 "listen_address": { 00:17:02.565 "trtype": "TCP", 00:17:02.565 "adrfam": "IPv4", 00:17:02.565 "traddr": "10.0.0.2", 00:17:02.565 "trsvcid": "4420" 00:17:02.565 }, 00:17:02.565 "peer_address": { 00:17:02.565 "trtype": "TCP", 00:17:02.565 "adrfam": "IPv4", 00:17:02.565 "traddr": "10.0.0.1", 00:17:02.565 "trsvcid": "55644" 00:17:02.565 }, 00:17:02.565 "auth": { 00:17:02.565 "state": "completed", 00:17:02.565 "digest": "sha384", 00:17:02.565 "dhgroup": "ffdhe3072" 00:17:02.565 } 00:17:02.565 } 00:17:02.565 ]' 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.565 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.822 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.756 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.013 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.579 00:17:04.579 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.579 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.579 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.579 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.579 09:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.579 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.579 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.836 { 00:17:04.836 "cntlid": 71, 00:17:04.836 "qid": 0, 00:17:04.836 "state": "enabled", 00:17:04.836 "thread": "nvmf_tgt_poll_group_000", 00:17:04.836 "listen_address": { 00:17:04.836 "trtype": "TCP", 00:17:04.836 "adrfam": "IPv4", 00:17:04.836 "traddr": "10.0.0.2", 00:17:04.836 "trsvcid": "4420" 00:17:04.836 }, 00:17:04.836 "peer_address": { 00:17:04.836 "trtype": "TCP", 00:17:04.836 "adrfam": "IPv4", 00:17:04.836 "traddr": "10.0.0.1", 00:17:04.836 "trsvcid": "55670" 00:17:04.836 }, 00:17:04.836 "auth": { 00:17:04.836 "state": "completed", 00:17:04.836 "digest": "sha384", 00:17:04.836 "dhgroup": "ffdhe3072" 00:17:04.836 } 00:17:04.836 } 00:17:04.836 ]' 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.836 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.094 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:17:06.026 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.027 09:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.027 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.285 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.851 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.851 09:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.851 { 00:17:06.851 "cntlid": 73, 00:17:06.851 "qid": 0, 00:17:06.851 "state": "enabled", 00:17:06.851 "thread": "nvmf_tgt_poll_group_000", 00:17:06.851 "listen_address": { 00:17:06.851 "trtype": "TCP", 00:17:06.851 "adrfam": "IPv4", 00:17:06.851 "traddr": "10.0.0.2", 00:17:06.851 "trsvcid": "4420" 00:17:06.851 }, 00:17:06.851 "peer_address": { 00:17:06.851 "trtype": "TCP", 00:17:06.851 "adrfam": "IPv4", 00:17:06.851 "traddr": "10.0.0.1", 00:17:06.851 "trsvcid": "55688" 00:17:06.851 }, 00:17:06.851 "auth": { 00:17:06.851 "state": "completed", 00:17:06.851 "digest": "sha384", 00:17:06.851 "dhgroup": "ffdhe4096" 00:17:06.851 } 00:17:06.851 } 00:17:06.851 ]' 00:17:06.851 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.109 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.367 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.300 09:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.578 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.855 00:17:09.140 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:09.141 { 00:17:09.141 "cntlid": 75, 00:17:09.141 "qid": 0, 00:17:09.141 "state": "enabled", 00:17:09.141 "thread": "nvmf_tgt_poll_group_000", 00:17:09.141 "listen_address": { 00:17:09.141 "trtype": "TCP", 00:17:09.141 "adrfam": "IPv4", 00:17:09.141 "traddr": "10.0.0.2", 00:17:09.141 "trsvcid": "4420" 00:17:09.141 }, 00:17:09.141 "peer_address": { 00:17:09.141 "trtype": "TCP", 00:17:09.141 "adrfam": "IPv4", 00:17:09.141 "traddr": "10.0.0.1", 00:17:09.141 "trsvcid": "49774" 00:17:09.141 }, 00:17:09.141 "auth": { 00:17:09.141 "state": "completed", 00:17:09.141 "digest": "sha384", 00:17:09.141 "dhgroup": "ffdhe4096" 00:17:09.141 } 00:17:09.141 } 00:17:09.141 ]' 00:17:09.141 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.411 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.673 09:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.609 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.865 
09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.865 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.431 00:17:11.431 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.431 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.431 09:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.431 { 00:17:11.431 "cntlid": 77, 00:17:11.431 "qid": 0, 00:17:11.431 "state": "enabled", 00:17:11.431 "thread": "nvmf_tgt_poll_group_000", 00:17:11.431 "listen_address": { 00:17:11.431 "trtype": "TCP", 00:17:11.431 "adrfam": "IPv4", 00:17:11.431 "traddr": "10.0.0.2", 00:17:11.431 "trsvcid": "4420" 00:17:11.431 }, 00:17:11.431 "peer_address": { 
00:17:11.431 "trtype": "TCP", 00:17:11.431 "adrfam": "IPv4", 00:17:11.431 "traddr": "10.0.0.1", 00:17:11.431 "trsvcid": "49810" 00:17:11.431 }, 00:17:11.431 "auth": { 00:17:11.431 "state": "completed", 00:17:11.431 "digest": "sha384", 00:17:11.431 "dhgroup": "ffdhe4096" 00:17:11.431 } 00:17:11.431 } 00:17:11.431 ]' 00:17:11.431 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.689 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.947 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.878 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.136 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.393 00:17:13.393 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.393 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.393 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.651 { 00:17:13.651 "cntlid": 79, 00:17:13.651 "qid": 0, 00:17:13.651 "state": "enabled", 00:17:13.651 "thread": "nvmf_tgt_poll_group_000", 00:17:13.651 "listen_address": { 00:17:13.651 "trtype": "TCP", 00:17:13.651 "adrfam": "IPv4", 00:17:13.651 "traddr": "10.0.0.2", 00:17:13.651 "trsvcid": "4420" 00:17:13.651 }, 00:17:13.651 "peer_address": { 00:17:13.651 "trtype": "TCP", 00:17:13.651 "adrfam": "IPv4", 00:17:13.651 "traddr": "10.0.0.1", 00:17:13.651 "trsvcid": "49832" 00:17:13.651 }, 00:17:13.651 "auth": { 00:17:13.651 "state": "completed", 00:17:13.651 "digest": "sha384", 00:17:13.651 "dhgroup": "ffdhe4096" 00:17:13.651 } 00:17:13.651 } 00:17:13.651 ]' 00:17:13.651 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.909 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.167 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.099 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.357 09:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.923 00:17:15.923 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.923 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.923 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.181 { 00:17:16.181 "cntlid": 81, 00:17:16.181 "qid": 0, 00:17:16.181 "state": "enabled", 00:17:16.181 "thread": "nvmf_tgt_poll_group_000", 00:17:16.181 "listen_address": { 00:17:16.181 "trtype": "TCP", 00:17:16.181 "adrfam": "IPv4", 00:17:16.181 "traddr": "10.0.0.2", 00:17:16.181 "trsvcid": "4420" 00:17:16.181 }, 00:17:16.181 "peer_address": { 00:17:16.181 "trtype": "TCP", 00:17:16.181 "adrfam": "IPv4", 00:17:16.181 "traddr": "10.0.0.1", 00:17:16.181 "trsvcid": "49862" 00:17:16.181 }, 00:17:16.181 "auth": { 00:17:16.181 "state": "completed", 00:17:16.181 "digest": "sha384", 00:17:16.181 "dhgroup": "ffdhe6144" 00:17:16.181 } 00:17:16.181 } 00:17:16.181 ]' 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.181 09:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.181 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.439 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.372 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.630 09:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.630 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.195 00:17:18.196 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.196 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.196 09:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.453 { 00:17:18.453 "cntlid": 83, 00:17:18.453 "qid": 0, 00:17:18.453 "state": "enabled", 00:17:18.453 "thread": "nvmf_tgt_poll_group_000", 00:17:18.453 "listen_address": { 00:17:18.453 "trtype": "TCP", 00:17:18.453 "adrfam": "IPv4", 00:17:18.453 "traddr": "10.0.0.2", 00:17:18.453 "trsvcid": "4420" 00:17:18.453 }, 00:17:18.453 "peer_address": { 00:17:18.453 "trtype": "TCP", 00:17:18.453 "adrfam": "IPv4", 00:17:18.453 "traddr": "10.0.0.1", 00:17:18.453 "trsvcid": "45952" 00:17:18.453 }, 00:17:18.453 "auth": { 00:17:18.453 "state": "completed", 00:17:18.453 "digest": "sha384", 00:17:18.453 "dhgroup": "ffdhe6144" 00:17:18.453 } 00:17:18.453 } 00:17:18.453 ]' 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.453 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.711 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.711 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.711 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.711 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.711 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.969 09:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.901 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.160 09:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.160 09:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.724 00:17:20.724 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.724 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.724 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.980 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.980 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.980 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.981 { 00:17:20.981 "cntlid": 85, 00:17:20.981 "qid": 0, 00:17:20.981 "state": "enabled", 00:17:20.981 "thread": "nvmf_tgt_poll_group_000", 00:17:20.981 "listen_address": { 00:17:20.981 "trtype": "TCP", 00:17:20.981 "adrfam": "IPv4", 00:17:20.981 "traddr": "10.0.0.2", 00:17:20.981 "trsvcid": "4420" 00:17:20.981 }, 00:17:20.981 "peer_address": { 00:17:20.981 "trtype": "TCP", 00:17:20.981 "adrfam": "IPv4", 00:17:20.981 "traddr": "10.0.0.1", 00:17:20.981 "trsvcid": "45990" 00:17:20.981 }, 00:17:20.981 "auth": { 00:17:20.981 "state": "completed", 00:17:20.981 "digest": "sha384", 00:17:20.981 "dhgroup": "ffdhe6144" 00:17:20.981 } 00:17:20.981 } 00:17:20.981 ]' 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.981 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.238 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.238 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.238 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.496 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:17:22.429 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.429 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.687 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.687 09:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.252 00:17:23.252 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.252 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.252 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.510 { 00:17:23.510 "cntlid": 87, 00:17:23.510 "qid": 0, 00:17:23.510 "state": "enabled", 00:17:23.510 "thread": "nvmf_tgt_poll_group_000", 00:17:23.510 "listen_address": { 00:17:23.510 "trtype": "TCP", 00:17:23.510 "adrfam": "IPv4", 00:17:23.510 "traddr": "10.0.0.2", 00:17:23.510 "trsvcid": "4420" 00:17:23.510 }, 00:17:23.510 "peer_address": { 00:17:23.510 "trtype": "TCP", 00:17:23.510 "adrfam": "IPv4", 00:17:23.510 "traddr": "10.0.0.1", 00:17:23.510 "trsvcid": "46026" 00:17:23.510 }, 00:17:23.510 "auth": { 00:17:23.510 "state": "completed", 00:17:23.510 "digest": "sha384", 00:17:23.510 "dhgroup": "ffdhe6144" 00:17:23.510 } 00:17:23.510 } 00:17:23.510 ]' 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.510 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.768 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd 
--dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:17:24.700 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.700 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:24.700 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.700 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.700 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.700 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.701 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.701 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.701 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.266 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.832 00:17:25.832 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.832 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.832 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.090 { 00:17:26.090 "cntlid": 89, 00:17:26.090 "qid": 0, 00:17:26.090 "state": "enabled", 00:17:26.090 "thread": "nvmf_tgt_poll_group_000", 00:17:26.090 "listen_address": { 00:17:26.090 "trtype": "TCP", 00:17:26.090 "adrfam": "IPv4", 00:17:26.090 "traddr": "10.0.0.2", 00:17:26.090 "trsvcid": "4420" 00:17:26.090 }, 00:17:26.090 "peer_address": { 00:17:26.090 "trtype": "TCP", 00:17:26.090 "adrfam": "IPv4", 00:17:26.090 "traddr": "10.0.0.1", 00:17:26.090 "trsvcid": "46048" 00:17:26.090 }, 00:17:26.090 "auth": { 00:17:26.090 "state": "completed", 00:17:26.090 "digest": "sha384", 00:17:26.090 "dhgroup": "ffdhe8192" 00:17:26.090 } 00:17:26.090 } 00:17:26.090 ]' 00:17:26.090 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.348 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.606 09:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:17:27.539 09:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.539 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.797 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.729 00:17:28.729 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.729 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.729 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.986 { 00:17:28.986 "cntlid": 91, 00:17:28.986 "qid": 0, 00:17:28.986 "state": "enabled", 00:17:28.986 "thread": "nvmf_tgt_poll_group_000", 00:17:28.986 "listen_address": { 00:17:28.986 "trtype": "TCP", 00:17:28.986 "adrfam": "IPv4", 00:17:28.986 "traddr": "10.0.0.2", 00:17:28.986 "trsvcid": "4420" 00:17:28.986 }, 00:17:28.986 "peer_address": { 00:17:28.986 "trtype": "TCP", 00:17:28.986 "adrfam": "IPv4", 00:17:28.986 "traddr": "10.0.0.1", 00:17:28.986 "trsvcid": "33456" 00:17:28.986 }, 00:17:28.986 "auth": { 00:17:28.986 "state": "completed", 00:17:28.986 "digest": "sha384", 00:17:28.986 "dhgroup": "ffdhe8192" 00:17:28.986 } 00:17:28.986 } 00:17:28.986 ]' 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.986 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.244 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.244 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.244 09:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.501 09:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.434 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.697 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.698 09:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.641 00:17:31.641 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.641 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.641 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.899 { 00:17:31.899 "cntlid": 93, 00:17:31.899 "qid": 0, 00:17:31.899 "state": "enabled", 00:17:31.899 "thread": "nvmf_tgt_poll_group_000", 00:17:31.899 "listen_address": { 00:17:31.899 "trtype": "TCP", 00:17:31.899 "adrfam": "IPv4", 00:17:31.899 "traddr": "10.0.0.2", 00:17:31.899 "trsvcid": "4420" 00:17:31.899 }, 00:17:31.899 "peer_address": { 00:17:31.899 "trtype": "TCP", 00:17:31.899 "adrfam": "IPv4", 00:17:31.899 "traddr": "10.0.0.1", 00:17:31.899 "trsvcid": "33480" 00:17:31.899 }, 00:17:31.899 "auth": { 00:17:31.899 "state": "completed", 00:17:31.899 "digest": "sha384", 00:17:31.899 "dhgroup": "ffdhe8192" 00:17:31.899 } 00:17:31.899 } 00:17:31.899 ]' 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.899 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.157 09:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:17:33.089 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.347 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:33.347 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.347 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.347 09:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.347 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.347 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.347 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.604 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.605 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.605 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.605 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.538 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.538 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:34.795 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.795 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.795 { 00:17:34.795 "cntlid": 95, 00:17:34.795 "qid": 0, 00:17:34.795 "state": "enabled", 00:17:34.795 "thread": "nvmf_tgt_poll_group_000", 00:17:34.795 "listen_address": { 00:17:34.795 "trtype": "TCP", 00:17:34.795 "adrfam": "IPv4", 00:17:34.795 "traddr": "10.0.0.2", 00:17:34.795 "trsvcid": "4420" 00:17:34.795 }, 00:17:34.795 "peer_address": { 00:17:34.795 "trtype": "TCP", 00:17:34.795 "adrfam": "IPv4", 00:17:34.795 "traddr": "10.0.0.1", 00:17:34.795 "trsvcid": "33494" 00:17:34.795 }, 00:17:34.795 "auth": { 00:17:34.795 "state": "completed", 00:17:34.795 "digest": "sha384", 00:17:34.795 "dhgroup": "ffdhe8192" 00:17:34.795 } 00:17:34.795 } 00:17:34.795 ]' 00:17:34.795 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.795 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.795 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.795 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.796 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.796 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.796 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.796 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.053 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.987 09:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.987 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.245 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.246 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.246 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.503 00:17:36.504 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.504 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.504 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.761 09:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.761 { 00:17:36.761 "cntlid": 97, 00:17:36.761 "qid": 0, 00:17:36.761 "state": "enabled", 00:17:36.761 "thread": "nvmf_tgt_poll_group_000", 00:17:36.761 "listen_address": { 00:17:36.761 "trtype": "TCP", 00:17:36.761 "adrfam": "IPv4", 00:17:36.761 "traddr": "10.0.0.2", 00:17:36.761 "trsvcid": "4420" 00:17:36.761 }, 00:17:36.761 "peer_address": { 00:17:36.761 "trtype": "TCP", 00:17:36.761 "adrfam": "IPv4", 00:17:36.761 "traddr": "10.0.0.1", 00:17:36.761 "trsvcid": "33514" 00:17:36.761 }, 00:17:36.761 "auth": { 00:17:36.761 "state": "completed", 00:17:36.761 "digest": "sha512", 00:17:36.761 "dhgroup": "null" 00:17:36.761 } 00:17:36.761 } 00:17:36.761 ]' 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.761 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.019 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.019 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.019 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.019 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.019 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.276 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:17:38.208 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.209 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.724 00:17:38.724 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.724 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.724 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.982 { 00:17:38.982 "cntlid": 99, 00:17:38.982 "qid": 0, 00:17:38.982 "state": "enabled", 00:17:38.982 "thread": "nvmf_tgt_poll_group_000", 00:17:38.982 "listen_address": { 00:17:38.982 "trtype": "TCP", 00:17:38.982 "adrfam": "IPv4", 00:17:38.982 
"traddr": "10.0.0.2", 00:17:38.982 "trsvcid": "4420" 00:17:38.982 }, 00:17:38.982 "peer_address": { 00:17:38.982 "trtype": "TCP", 00:17:38.982 "adrfam": "IPv4", 00:17:38.982 "traddr": "10.0.0.1", 00:17:38.982 "trsvcid": "35374" 00:17:38.982 }, 00:17:38.982 "auth": { 00:17:38.982 "state": "completed", 00:17:38.982 "digest": "sha512", 00:17:38.982 "dhgroup": "null" 00:17:38.982 } 00:17:38.982 } 00:17:38.982 ]' 00:17:38.982 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.240 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.498 09:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.430 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.688 09:32:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.688 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.253 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.253 { 00:17:41.253 "cntlid": 101, 00:17:41.253 "qid": 0, 00:17:41.253 "state": "enabled", 00:17:41.253 "thread": "nvmf_tgt_poll_group_000", 00:17:41.253 "listen_address": { 00:17:41.253 "trtype": "TCP", 00:17:41.253 "adrfam": "IPv4", 00:17:41.253 "traddr": "10.0.0.2", 00:17:41.253 "trsvcid": "4420" 00:17:41.253 }, 00:17:41.253 "peer_address": { 00:17:41.253 "trtype": "TCP", 00:17:41.253 "adrfam": "IPv4", 00:17:41.253 "traddr": "10.0.0.1", 00:17:41.253 "trsvcid": "35390" 00:17:41.253 }, 00:17:41.253 "auth": { 00:17:41.253 "state": "completed", 00:17:41.253 "digest": "sha512", 00:17:41.253 "dhgroup": "null" 
00:17:41.253 } 00:17:41.253 } 00:17:41.253 ]' 00:17:41.253 09:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.512 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.771 09:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.703 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.961 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.218 00:17:43.218 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.218 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.218 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.475 { 00:17:43.475 "cntlid": 103, 00:17:43.475 "qid": 0, 00:17:43.475 "state": "enabled", 00:17:43.475 "thread": "nvmf_tgt_poll_group_000", 00:17:43.475 "listen_address": { 00:17:43.475 "trtype": "TCP", 00:17:43.475 "adrfam": "IPv4", 00:17:43.475 "traddr": "10.0.0.2", 00:17:43.475 "trsvcid": "4420" 00:17:43.475 }, 00:17:43.475 "peer_address": { 00:17:43.475 "trtype": "TCP", 00:17:43.475 "adrfam": "IPv4", 00:17:43.475 "traddr": "10.0.0.1", 00:17:43.475 "trsvcid": "35422" 00:17:43.475 }, 00:17:43.475 "auth": { 00:17:43.475 "state": "completed", 00:17:43.475 "digest": "sha512", 00:17:43.475 "dhgroup": "null" 00:17:43.475 } 00:17:43.475 } 00:17:43.475 ]' 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.475 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.733 09:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.733 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.733 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.733 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.733 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.990 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.921 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.178 09:32:17 
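[editor's note] For orientation, the entries above follow the loop in target/auth.sh that the trace keeps referencing (@92-@96): for each DH group, each pre-registered key is tried against a fixed sha512 digest, with bdev_nvme_set_options narrowing the host to one combination before every attempt. Below is a condensed sketch of that loop, not part of the run; the group and key lists are only the values that appear in this excerpt, and the rpc.py path and host socket are the ones printed in the trace.

  # Sketch: the digest/dhgroup sweep driven by target/auth.sh (values from this excerpt only)
  hostrpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do      # @92 in the trace
    for keyid in 0 1 2 3; do                                 # @93
      # limit the host to one digest/dhgroup combination before each attempt (@94)
      "$hostrpc" -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # @96 then runs: connect_authenticate sha512 "$dhgroup" "$keyid"
    done
  done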
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.178 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.436 00:17:45.436 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.436 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.436 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.693 { 00:17:45.693 "cntlid": 105, 00:17:45.693 "qid": 0, 00:17:45.693 "state": "enabled", 00:17:45.693 "thread": "nvmf_tgt_poll_group_000", 00:17:45.693 "listen_address": { 00:17:45.693 "trtype": "TCP", 00:17:45.693 "adrfam": "IPv4", 00:17:45.693 "traddr": "10.0.0.2", 00:17:45.693 "trsvcid": "4420" 00:17:45.693 }, 00:17:45.693 "peer_address": { 00:17:45.693 "trtype": "TCP", 00:17:45.693 "adrfam": "IPv4", 00:17:45.693 "traddr": "10.0.0.1", 00:17:45.693 "trsvcid": "35436" 00:17:45.693 }, 00:17:45.693 "auth": { 00:17:45.693 "state": "completed", 00:17:45.693 "digest": "sha512", 00:17:45.693 "dhgroup": "ffdhe2048" 00:17:45.693 } 00:17:45.693 } 00:17:45.693 ]' 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.693 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.951 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.951 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.951 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.951 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.951 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.208 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.141 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.399 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.656 00:17:47.656 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.656 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.656 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.914 { 00:17:47.914 "cntlid": 107, 00:17:47.914 "qid": 0, 00:17:47.914 "state": "enabled", 00:17:47.914 "thread": "nvmf_tgt_poll_group_000", 00:17:47.914 "listen_address": { 00:17:47.914 "trtype": "TCP", 00:17:47.914 "adrfam": "IPv4", 00:17:47.914 "traddr": "10.0.0.2", 00:17:47.914 "trsvcid": "4420" 00:17:47.914 }, 00:17:47.914 "peer_address": { 00:17:47.914 "trtype": "TCP", 00:17:47.914 "adrfam": "IPv4", 00:17:47.914 "traddr": "10.0.0.1", 00:17:47.914 "trsvcid": "35464" 00:17:47.914 }, 00:17:47.914 "auth": { 00:17:47.914 "state": "completed", 00:17:47.914 "digest": "sha512", 00:17:47.914 "dhgroup": "ffdhe2048" 00:17:47.914 } 00:17:47.914 } 00:17:47.914 ]' 00:17:47.914 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.172 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.429 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.363 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
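[editor's note] The step just traced (key2 under sha512/ffdhe2048) is the core of connect_authenticate: the target is told which DH-HMAC-CHAP keys this host must present, then the host attaches through bdev_nvme with the matching keys. A minimal sketch of the two calls follows, with flags copied from the trace; key2/ckey2 are key names registered earlier in the script, and the bare rpc.py invocation stands in for the script's rpc_cmd wrapper (its target-side socket is not expanded in the trace).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

  # target side (@39): allow the host and bind it to a key / controller-key pair
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side (@40): attach a controller over TCP, authenticating with the same keys
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2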
00:17:49.621 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.879 00:17:49.879 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.879 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.879 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.137 { 00:17:50.137 "cntlid": 109, 00:17:50.137 "qid": 0, 00:17:50.137 "state": "enabled", 00:17:50.137 "thread": "nvmf_tgt_poll_group_000", 00:17:50.137 "listen_address": { 00:17:50.137 "trtype": "TCP", 00:17:50.137 "adrfam": "IPv4", 00:17:50.137 "traddr": "10.0.0.2", 00:17:50.137 "trsvcid": "4420" 00:17:50.137 }, 00:17:50.137 "peer_address": { 00:17:50.137 "trtype": "TCP", 00:17:50.137 "adrfam": "IPv4", 00:17:50.137 "traddr": "10.0.0.1", 00:17:50.137 "trsvcid": "43348" 00:17:50.137 }, 00:17:50.137 "auth": { 00:17:50.137 "state": "completed", 00:17:50.137 "digest": "sha512", 00:17:50.137 "dhgroup": "ffdhe2048" 00:17:50.137 } 00:17:50.137 } 00:17:50.137 ]' 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.137 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.395 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.395 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.395 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.395 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.395 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.654 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
--hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.586 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.844 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.102 00:17:52.102 09:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.102 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.102 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.359 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.359 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.359 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.359 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.359 { 00:17:52.359 "cntlid": 111, 00:17:52.359 "qid": 0, 00:17:52.359 "state": "enabled", 00:17:52.359 "thread": "nvmf_tgt_poll_group_000", 00:17:52.359 "listen_address": { 00:17:52.359 "trtype": "TCP", 00:17:52.359 "adrfam": "IPv4", 00:17:52.359 "traddr": "10.0.0.2", 00:17:52.359 "trsvcid": "4420" 00:17:52.359 }, 00:17:52.359 "peer_address": { 00:17:52.359 "trtype": "TCP", 00:17:52.359 "adrfam": "IPv4", 00:17:52.359 "traddr": "10.0.0.1", 00:17:52.359 "trsvcid": "43372" 00:17:52.359 }, 00:17:52.359 "auth": { 00:17:52.359 "state": "completed", 00:17:52.359 "digest": "sha512", 00:17:52.359 "dhgroup": "ffdhe2048" 00:17:52.359 } 00:17:52.359 } 00:17:52.359 ]' 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.359 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.617 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.617 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.617 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.875 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.807 09:32:26 
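[editor's note] Besides the bdev_nvme attach, each cycle also exercises the kernel initiator: nvme connect is issued with the raw DHHC-1 secrets for the key under test and then torn down with nvme disconnect, as in the @52/@55 entries above. A sketch of that leg is below; the secrets are left as shell variables standing for the literal DHHC-1:xx:... strings printed in the trace.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=8b464f06-2980-e311-ba20-001e67a94acd
  # secret / ctrl_secret hold the DHHC-1:xx:... strings shown in the trace (@52)
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
      --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n "$subnqn"     # @55: expect "disconnected 1 controller(s)"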
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.807 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.064 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.321 00:17:54.321 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.321 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.321 09:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.578 { 00:17:54.578 "cntlid": 113, 00:17:54.578 "qid": 0, 00:17:54.578 "state": "enabled", 00:17:54.578 "thread": "nvmf_tgt_poll_group_000", 00:17:54.578 "listen_address": { 00:17:54.578 "trtype": "TCP", 00:17:54.578 "adrfam": "IPv4", 00:17:54.578 "traddr": "10.0.0.2", 00:17:54.578 "trsvcid": "4420" 00:17:54.578 }, 00:17:54.578 "peer_address": { 00:17:54.578 "trtype": "TCP", 00:17:54.578 "adrfam": "IPv4", 00:17:54.578 "traddr": "10.0.0.1", 00:17:54.578 "trsvcid": "43394" 00:17:54.578 }, 00:17:54.578 "auth": { 00:17:54.578 "state": "completed", 00:17:54.578 "digest": "sha512", 00:17:54.578 "dhgroup": "ffdhe3072" 00:17:54.578 } 00:17:54.578 } 00:17:54.578 ]' 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.578 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.143 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.075 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.334 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.591 00:17:56.591 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.591 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.591 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.863 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.864 { 00:17:56.864 "cntlid": 115, 00:17:56.864 "qid": 0, 00:17:56.864 "state": "enabled", 00:17:56.864 "thread": "nvmf_tgt_poll_group_000", 00:17:56.864 "listen_address": { 00:17:56.864 "trtype": "TCP", 00:17:56.864 "adrfam": "IPv4", 00:17:56.864 "traddr": "10.0.0.2", 00:17:56.864 "trsvcid": "4420" 00:17:56.864 }, 00:17:56.864 "peer_address": { 00:17:56.864 "trtype": "TCP", 00:17:56.864 "adrfam": "IPv4", 00:17:56.864 "traddr": "10.0.0.1", 00:17:56.864 "trsvcid": "43408" 00:17:56.864 }, 00:17:56.864 "auth": { 00:17:56.864 "state": "completed", 00:17:56.864 "digest": "sha512", 00:17:56.864 "dhgroup": "ffdhe3072" 00:17:56.864 } 00:17:56.864 } 00:17:56.864 ]' 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.864 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.124 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.056 09:32:30 
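[editor's note] The qpair dump and jq checks above are the verification half of each cycle: the host must see the attached controller, and the target-side qpair must report the negotiated auth parameters. A condensed sketch under the same assumptions as earlier (jq filters copied from the trace, bare rpc.py standing in for rpc_cmd), shown here for the ffdhe3072 pass:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side (@44): the attached controller must show up as nvme0
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # target side (@45-@48): the qpair reports the digest, DH group, and auth state
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]   # group under test
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # @49: detach before moving on to the next key
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0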
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.056 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.314 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.571 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.571 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.571 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.827 00:17:58.827 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.827 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.828 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.085 09:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.085 { 00:17:59.085 "cntlid": 117, 00:17:59.085 "qid": 0, 00:17:59.085 "state": "enabled", 00:17:59.085 "thread": "nvmf_tgt_poll_group_000", 00:17:59.085 "listen_address": { 00:17:59.085 "trtype": "TCP", 00:17:59.085 "adrfam": "IPv4", 00:17:59.085 "traddr": "10.0.0.2", 00:17:59.085 "trsvcid": "4420" 00:17:59.085 }, 00:17:59.085 "peer_address": { 00:17:59.085 "trtype": "TCP", 00:17:59.085 "adrfam": "IPv4", 00:17:59.085 "traddr": "10.0.0.1", 00:17:59.085 "trsvcid": "42588" 00:17:59.085 }, 00:17:59.085 "auth": { 00:17:59.085 "state": "completed", 00:17:59.085 "digest": "sha512", 00:17:59.085 "dhgroup": "ffdhe3072" 00:17:59.085 } 00:17:59.085 } 00:17:59.085 ]' 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.085 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.343 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:18:00.276 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.534 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.099 00:18:01.099 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.099 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.099 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.099 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.357 { 00:18:01.357 "cntlid": 119, 00:18:01.357 "qid": 0, 00:18:01.357 "state": "enabled", 00:18:01.357 "thread": 
"nvmf_tgt_poll_group_000", 00:18:01.357 "listen_address": { 00:18:01.357 "trtype": "TCP", 00:18:01.357 "adrfam": "IPv4", 00:18:01.357 "traddr": "10.0.0.2", 00:18:01.357 "trsvcid": "4420" 00:18:01.357 }, 00:18:01.357 "peer_address": { 00:18:01.357 "trtype": "TCP", 00:18:01.357 "adrfam": "IPv4", 00:18:01.357 "traddr": "10.0.0.1", 00:18:01.357 "trsvcid": "42620" 00:18:01.357 }, 00:18:01.357 "auth": { 00:18:01.357 "state": "completed", 00:18:01.357 "digest": "sha512", 00:18:01.357 "dhgroup": "ffdhe3072" 00:18:01.357 } 00:18:01.357 } 00:18:01.357 ]' 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.357 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.615 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.547 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.805 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.370 00:18:03.370 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.370 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.370 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.370 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.370 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.370 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.370 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.628 { 00:18:03.628 "cntlid": 121, 00:18:03.628 "qid": 0, 00:18:03.628 "state": "enabled", 00:18:03.628 "thread": "nvmf_tgt_poll_group_000", 00:18:03.628 "listen_address": { 00:18:03.628 "trtype": "TCP", 00:18:03.628 "adrfam": "IPv4", 00:18:03.628 "traddr": "10.0.0.2", 00:18:03.628 "trsvcid": "4420" 00:18:03.628 }, 00:18:03.628 "peer_address": { 00:18:03.628 "trtype": "TCP", 00:18:03.628 "adrfam": 
"IPv4", 00:18:03.628 "traddr": "10.0.0.1", 00:18:03.628 "trsvcid": "42658" 00:18:03.628 }, 00:18:03.628 "auth": { 00:18:03.628 "state": "completed", 00:18:03.628 "digest": "sha512", 00:18:03.628 "dhgroup": "ffdhe4096" 00:18:03.628 } 00:18:03.628 } 00:18:03.628 ]' 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.628 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.886 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.818 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.076 
09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.076 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.642 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.642 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.642 { 00:18:05.642 "cntlid": 123, 00:18:05.642 "qid": 0, 00:18:05.642 "state": "enabled", 00:18:05.642 "thread": "nvmf_tgt_poll_group_000", 00:18:05.642 "listen_address": { 00:18:05.642 "trtype": "TCP", 00:18:05.642 "adrfam": "IPv4", 00:18:05.642 "traddr": "10.0.0.2", 00:18:05.642 "trsvcid": "4420" 00:18:05.642 }, 00:18:05.642 "peer_address": { 00:18:05.642 "trtype": "TCP", 00:18:05.642 "adrfam": "IPv4", 00:18:05.642 "traddr": "10.0.0.1", 00:18:05.642 "trsvcid": "42682" 00:18:05.642 }, 00:18:05.642 "auth": { 00:18:05.642 "state": "completed", 00:18:05.642 "digest": "sha512", 00:18:05.642 "dhgroup": "ffdhe4096" 00:18:05.642 } 00:18:05.642 } 00:18:05.642 ]' 00:18:05.642 09:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.900 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.158 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.091 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.349 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.350 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.350 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.350 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.350 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.607 00:18:07.607 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.607 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.607 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.865 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.865 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.866 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.866 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.866 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.866 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.866 { 00:18:07.866 "cntlid": 125, 00:18:07.866 "qid": 0, 00:18:07.866 "state": "enabled", 00:18:07.866 "thread": "nvmf_tgt_poll_group_000", 00:18:07.866 "listen_address": { 00:18:07.866 "trtype": "TCP", 00:18:07.866 "adrfam": "IPv4", 00:18:07.866 "traddr": "10.0.0.2", 00:18:07.866 "trsvcid": "4420" 00:18:07.866 }, 00:18:07.866 "peer_address": { 00:18:07.866 "trtype": "TCP", 00:18:07.866 "adrfam": "IPv4", 00:18:07.866 "traddr": "10.0.0.1", 00:18:07.866 "trsvcid": "42706" 00:18:07.866 }, 00:18:07.866 "auth": { 00:18:07.866 "state": "completed", 00:18:07.866 "digest": "sha512", 00:18:07.866 "dhgroup": "ffdhe4096" 00:18:07.866 } 00:18:07.866 } 00:18:07.866 ]' 00:18:07.866 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.124 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.124 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.124 
09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.124 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.124 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.124 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.124 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.382 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.315 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.572 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:09.572 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.573 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.830 00:18:09.830 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.830 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.830 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.088 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.088 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.088 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.088 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.088 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.088 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.089 { 00:18:10.089 "cntlid": 127, 00:18:10.089 "qid": 0, 00:18:10.089 "state": "enabled", 00:18:10.089 "thread": "nvmf_tgt_poll_group_000", 00:18:10.089 "listen_address": { 00:18:10.089 "trtype": "TCP", 00:18:10.089 "adrfam": "IPv4", 00:18:10.089 "traddr": "10.0.0.2", 00:18:10.089 "trsvcid": "4420" 00:18:10.089 }, 00:18:10.089 "peer_address": { 00:18:10.089 "trtype": "TCP", 00:18:10.089 "adrfam": "IPv4", 00:18:10.089 "traddr": "10.0.0.1", 00:18:10.089 "trsvcid": "46922" 00:18:10.089 }, 00:18:10.089 "auth": { 00:18:10.089 "state": "completed", 00:18:10.089 "digest": "sha512", 00:18:10.089 "dhgroup": "ffdhe4096" 00:18:10.089 } 00:18:10.089 } 00:18:10.089 ]' 00:18:10.089 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.347 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.605 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.587 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.898 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.486 00:18:12.486 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.486 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.486 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.743 { 00:18:12.743 "cntlid": 129, 00:18:12.743 "qid": 0, 00:18:12.743 "state": "enabled", 00:18:12.743 "thread": "nvmf_tgt_poll_group_000", 00:18:12.743 "listen_address": { 00:18:12.743 "trtype": "TCP", 00:18:12.743 "adrfam": "IPv4", 00:18:12.743 "traddr": "10.0.0.2", 00:18:12.743 "trsvcid": "4420" 00:18:12.743 }, 00:18:12.743 "peer_address": { 00:18:12.743 "trtype": "TCP", 00:18:12.743 "adrfam": "IPv4", 00:18:12.743 "traddr": "10.0.0.1", 00:18:12.743 "trsvcid": "46962" 00:18:12.743 }, 00:18:12.743 "auth": { 00:18:12.743 "state": "completed", 00:18:12.743 "digest": "sha512", 00:18:12.743 "dhgroup": "ffdhe6144" 00:18:12.743 } 00:18:12.743 } 00:18:12.743 ]' 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.743 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.000 
09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.374 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.374 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.375 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.940 00:18:14.940 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.940 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.940 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.198 { 00:18:15.198 "cntlid": 131, 00:18:15.198 "qid": 0, 00:18:15.198 "state": "enabled", 00:18:15.198 "thread": "nvmf_tgt_poll_group_000", 00:18:15.198 "listen_address": { 00:18:15.198 "trtype": "TCP", 00:18:15.198 "adrfam": "IPv4", 00:18:15.198 "traddr": "10.0.0.2", 00:18:15.198 "trsvcid": "4420" 00:18:15.198 }, 00:18:15.198 "peer_address": { 00:18:15.198 "trtype": "TCP", 00:18:15.198 "adrfam": "IPv4", 00:18:15.198 "traddr": "10.0.0.1", 00:18:15.198 "trsvcid": "46998" 00:18:15.198 }, 00:18:15.198 "auth": { 00:18:15.198 "state": "completed", 00:18:15.198 "digest": "sha512", 00:18:15.198 "dhgroup": "ffdhe6144" 00:18:15.198 } 00:18:15.198 } 00:18:15.198 ]' 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.198 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.456 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.456 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.456 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.714 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret 
DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.647 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.904 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.470 
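For reference, each connect_authenticate round that target/auth.sh runs above reduces to the host/target sequence below. This is a minimal sketch assembled only from the rpc.py and rpc_cmd calls visible in this log: HOSTRPC is shorthand introduced here for the host-socket rpc.py invocation that the hostrpc helper (target/auth.sh@31) issues, rpc_cmd is the test harness wrapper for the target-side RPC socket, and key2/ckey2 stand in for whichever key index the loop is on (the DH-HMAC-CHAP keys themselves are registered earlier in the run, outside this excerpt).

  # shorthand for the host-side rpc.py call used throughout this log (editor's abbreviation)
  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

  # host-side bdev_nvme options: restrict DH-HMAC-CHAP to the digest/dhgroup under test
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # target side: allow the host NQN on the subsystem with the key pair for this round
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach a controller to the subsystem, authenticating with the same keys
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2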
00:18:17.470 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.470 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.470 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.728 { 00:18:17.728 "cntlid": 133, 00:18:17.728 "qid": 0, 00:18:17.728 "state": "enabled", 00:18:17.728 "thread": "nvmf_tgt_poll_group_000", 00:18:17.728 "listen_address": { 00:18:17.728 "trtype": "TCP", 00:18:17.728 "adrfam": "IPv4", 00:18:17.728 "traddr": "10.0.0.2", 00:18:17.728 "trsvcid": "4420" 00:18:17.728 }, 00:18:17.728 "peer_address": { 00:18:17.728 "trtype": "TCP", 00:18:17.728 "adrfam": "IPv4", 00:18:17.728 "traddr": "10.0.0.1", 00:18:17.728 "trsvcid": "47014" 00:18:17.728 }, 00:18:17.728 "auth": { 00:18:17.728 "state": "completed", 00:18:17.728 "digest": "sha512", 00:18:17.728 "dhgroup": "ffdhe6144" 00:18:17.728 } 00:18:17.728 } 00:18:17.728 ]' 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.728 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.986 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.919 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.919 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.177 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.742 00:18:19.742 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.742 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.742 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.000 { 00:18:20.000 "cntlid": 135, 00:18:20.000 "qid": 0, 00:18:20.000 "state": "enabled", 00:18:20.000 "thread": "nvmf_tgt_poll_group_000", 00:18:20.000 "listen_address": { 00:18:20.000 "trtype": "TCP", 00:18:20.000 "adrfam": "IPv4", 00:18:20.000 "traddr": "10.0.0.2", 00:18:20.000 "trsvcid": "4420" 00:18:20.000 }, 00:18:20.000 "peer_address": { 00:18:20.000 "trtype": "TCP", 00:18:20.000 "adrfam": "IPv4", 00:18:20.000 "traddr": "10.0.0.1", 00:18:20.000 "trsvcid": "43976" 00:18:20.000 }, 00:18:20.000 "auth": { 00:18:20.000 "state": "completed", 00:18:20.000 "digest": "sha512", 00:18:20.000 "dhgroup": "ffdhe6144" 00:18:20.000 } 00:18:20.000 } 00:18:20.000 ]' 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.000 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.258 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.258 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.258 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.515 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:18:21.452 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.452 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:21.452 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.452 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:21.452 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.452 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.452 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.452 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.452 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.710 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.643 00:18:22.643 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.643 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.643 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
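The verification and teardown half of the same round, as exercised by target/auth.sh@44-56 in the entries above and below, checks that the controller came up and that the qpair actually completed DH-HMAC-CHAP with the expected parameters, then disconnects and deregisters the host. Again a condensed sketch of calls that appear verbatim in this log (ffdhe8192 is the group under test at this point; the DHHC-1 secret strings are elided here and should be read from the nvme connect lines in the log):

  # as in the earlier sketch: host-side rpc.py invocation (editor's abbreviation)
  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

  # confirm the host-side controller exists
  [[ "$($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

  # ask the target for the qpair and check the negotiated auth parameters
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha512"    ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe8192" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]

  # tear down the bdev controller, then authenticate once more via nvme-cli
  $HOSTRPC bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
      --hostid 8b464f06-2980-e311-ba20-001e67a94acd \
      --dhchap-secret "DHHC-1:00:<from log>" --dhchap-ctrl-secret "DHHC-1:03:<from log>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd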
00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.901 { 00:18:22.901 "cntlid": 137, 00:18:22.901 "qid": 0, 00:18:22.901 "state": "enabled", 00:18:22.901 "thread": "nvmf_tgt_poll_group_000", 00:18:22.901 "listen_address": { 00:18:22.901 "trtype": "TCP", 00:18:22.901 "adrfam": "IPv4", 00:18:22.901 "traddr": "10.0.0.2", 00:18:22.901 "trsvcid": "4420" 00:18:22.901 }, 00:18:22.901 "peer_address": { 00:18:22.901 "trtype": "TCP", 00:18:22.901 "adrfam": "IPv4", 00:18:22.901 "traddr": "10.0.0.1", 00:18:22.901 "trsvcid": "44002" 00:18:22.901 }, 00:18:22.901 "auth": { 00:18:22.901 "state": "completed", 00:18:22.901 "digest": "sha512", 00:18:22.901 "dhgroup": "ffdhe8192" 00:18:22.901 } 00:18:22.901 } 00:18:22.901 ]' 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.901 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.159 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.091 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.349 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.282 00:18:25.282 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.282 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.282 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.540 { 00:18:25.540 "cntlid": 139, 00:18:25.540 "qid": 0, 00:18:25.540 "state": "enabled", 00:18:25.540 "thread": "nvmf_tgt_poll_group_000", 00:18:25.540 "listen_address": { 00:18:25.540 "trtype": "TCP", 00:18:25.540 "adrfam": "IPv4", 00:18:25.540 "traddr": "10.0.0.2", 00:18:25.540 "trsvcid": "4420" 00:18:25.540 }, 00:18:25.540 "peer_address": { 00:18:25.540 "trtype": "TCP", 00:18:25.540 "adrfam": "IPv4", 00:18:25.540 "traddr": "10.0.0.1", 00:18:25.540 "trsvcid": "44036" 00:18:25.540 }, 00:18:25.540 "auth": { 00:18:25.540 "state": "completed", 00:18:25.540 "digest": "sha512", 00:18:25.540 "dhgroup": "ffdhe8192" 00:18:25.540 } 00:18:25.540 } 00:18:25.540 ]' 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.540 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.798 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.798 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.798 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.056 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:Y2FhOTFlM2NmYWIwMTE2MjBiMjU1Y2ZjYzY4YmQ2Y2N4iB+b: --dhchap-ctrl-secret DHHC-1:02:YmNlZGQxYjhjMDJlNjNmYzk3NWNlMTVhNjIwNmIxMmU4NWE3Yzg0M2E2MmE4N2Uybgew2w==: 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.989 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.247 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.180 00:18:28.180 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.180 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.180 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.437 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.437 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.437 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.438 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.438 { 00:18:28.438 "cntlid": 141, 00:18:28.438 "qid": 0, 00:18:28.438 "state": "enabled", 00:18:28.438 "thread": "nvmf_tgt_poll_group_000", 00:18:28.438 "listen_address": 
{ 00:18:28.438 "trtype": "TCP", 00:18:28.438 "adrfam": "IPv4", 00:18:28.438 "traddr": "10.0.0.2", 00:18:28.438 "trsvcid": "4420" 00:18:28.438 }, 00:18:28.438 "peer_address": { 00:18:28.438 "trtype": "TCP", 00:18:28.438 "adrfam": "IPv4", 00:18:28.438 "traddr": "10.0.0.1", 00:18:28.438 "trsvcid": "44062" 00:18:28.438 }, 00:18:28.438 "auth": { 00:18:28.438 "state": "completed", 00:18:28.438 "digest": "sha512", 00:18:28.438 "dhgroup": "ffdhe8192" 00:18:28.438 } 00:18:28.438 } 00:18:28.438 ]' 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.438 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.695 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:ZjlhZDY0ZjQzMDZlNzA3NGQyYzFhMjZiMTJlYTE4YjQyZDE1MTQ1YmJkZjllYzcxwuIwkA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzBhOGNmZjNjOWZjMDlkOGNjYzg1MDdmZTdkNWH1pEQf: 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.627 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.885 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.819 00:18:30.819 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.819 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.819 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.078 { 00:18:31.078 "cntlid": 143, 00:18:31.078 "qid": 0, 00:18:31.078 "state": "enabled", 00:18:31.078 "thread": "nvmf_tgt_poll_group_000", 00:18:31.078 "listen_address": { 00:18:31.078 "trtype": "TCP", 00:18:31.078 "adrfam": "IPv4", 00:18:31.078 "traddr": "10.0.0.2", 00:18:31.078 "trsvcid": "4420" 00:18:31.078 }, 00:18:31.078 "peer_address": { 00:18:31.078 "trtype": "TCP", 00:18:31.078 "adrfam": "IPv4", 00:18:31.078 "traddr": "10.0.0.1", 00:18:31.078 "trsvcid": "53470" 00:18:31.078 }, 00:18:31.078 "auth": { 00:18:31.078 "state": "completed", 00:18:31.078 "digest": "sha512", 00:18:31.078 "dhgroup": 
"ffdhe8192" 00:18:31.078 } 00:18:31.078 } 00:18:31.078 ]' 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.078 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.336 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.336 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.336 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.594 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:18:32.526 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.526 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:32.526 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.527 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.783 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.713 00:18:33.713 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.713 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.713 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.970 { 00:18:33.970 "cntlid": 145, 00:18:33.970 "qid": 0, 00:18:33.970 "state": "enabled", 00:18:33.970 "thread": "nvmf_tgt_poll_group_000", 00:18:33.970 "listen_address": { 00:18:33.970 "trtype": "TCP", 00:18:33.970 "adrfam": "IPv4", 00:18:33.970 "traddr": "10.0.0.2", 00:18:33.970 "trsvcid": "4420" 00:18:33.970 }, 00:18:33.970 "peer_address": { 00:18:33.970 "trtype": "TCP", 00:18:33.970 "adrfam": "IPv4", 00:18:33.970 "traddr": "10.0.0.1", 00:18:33.970 "trsvcid": "53496" 00:18:33.970 }, 00:18:33.970 "auth": { 00:18:33.970 
"state": "completed", 00:18:33.970 "digest": "sha512", 00:18:33.970 "dhgroup": "ffdhe8192" 00:18:33.970 } 00:18:33.970 } 00:18:33.970 ]' 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.970 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.228 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:ZjI0NzUyZmQ3Mzg0Mzg1ZmE2MzBkNTg5MDY0NTVkY2IxODA5NzZhNWJjYjcxNjk3KSWxWw==: --dhchap-ctrl-secret DHHC-1:03:N2ZiY2VmMWFjNTExYzA0ZjFiZmQ0ZjMyOWM3MWM5ODk3OGM5Y2YxYzI1YmI1MDJkNTAwZGNiMDA5NTg1MTFiOUL2wmM=: 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.161 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:35.419 09:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:35.419 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.351 request: 00:18:36.351 { 00:18:36.351 "name": "nvme0", 00:18:36.351 "trtype": "tcp", 00:18:36.351 "traddr": "10.0.0.2", 00:18:36.351 "adrfam": "ipv4", 00:18:36.351 "trsvcid": "4420", 00:18:36.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:36.351 "prchk_reftag": false, 00:18:36.351 "prchk_guard": false, 00:18:36.351 "hdgst": false, 00:18:36.351 "ddgst": false, 00:18:36.351 "dhchap_key": "key2", 00:18:36.351 "method": "bdev_nvme_attach_controller", 00:18:36.351 "req_id": 1 00:18:36.351 } 00:18:36.351 Got JSON-RPC error response 00:18:36.351 response: 00:18:36.351 { 00:18:36.351 "code": -5, 00:18:36.351 "message": "Input/output error" 00:18:36.351 } 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.351 
09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.351 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.915 request: 00:18:36.915 { 00:18:36.915 "name": "nvme0", 00:18:36.915 "trtype": "tcp", 00:18:36.915 "traddr": "10.0.0.2", 00:18:36.915 "adrfam": "ipv4", 00:18:36.915 "trsvcid": "4420", 00:18:36.915 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:36.915 "prchk_reftag": false, 00:18:36.915 "prchk_guard": false, 00:18:36.915 "hdgst": false, 00:18:36.915 "ddgst": false, 00:18:36.915 "dhchap_key": "key1", 00:18:36.915 "dhchap_ctrlr_key": "ckey2", 00:18:36.915 "method": "bdev_nvme_attach_controller", 00:18:36.915 "req_id": 1 00:18:36.915 } 00:18:36.915 Got JSON-RPC error response 00:18:36.915 response: 00:18:36.915 { 00:18:36.915 "code": -5, 00:18:36.915 "message": "Input/output error" 00:18:36.915 } 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:36.915 09:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:36.915 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.916 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:36.916 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.916 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:36.916 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.916 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.916 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.849 request: 00:18:37.849 { 00:18:37.849 "name": "nvme0", 00:18:37.849 "trtype": "tcp", 00:18:37.849 "traddr": "10.0.0.2", 00:18:37.849 "adrfam": "ipv4", 00:18:37.849 "trsvcid": "4420", 00:18:37.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.849 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:37.849 "prchk_reftag": false, 00:18:37.849 "prchk_guard": false, 00:18:37.849 "hdgst": false, 00:18:37.849 "ddgst": false, 00:18:37.849 "dhchap_key": "key1", 00:18:37.849 "dhchap_ctrlr_key": "ckey1", 00:18:37.849 "method": "bdev_nvme_attach_controller", 00:18:37.849 "req_id": 1 00:18:37.849 } 00:18:37.849 Got JSON-RPC error response 00:18:37.849 response: 00:18:37.849 { 00:18:37.849 "code": -5, 00:18:37.849 "message": "Input/output error" 00:18:37.849 } 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 512937 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 512937 ']' 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 512937 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 512937 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 512937' 00:18:37.849 killing process with pid 512937 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 512937 00:18:37.849 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 512937 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=536153 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 536153 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 536153 ']' 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.106 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.364 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.364 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:38.364 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.364 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.364 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 536153 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 536153 ']' 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
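[editor's note] The failure-path checks traced just before this point (the NOT hostrpc bdev_nvme_attach_controller calls) confirm that attaching with a key the subsystem was not provisioned with is rejected with a JSON-RPC "Input/output error" (code -5). The test then kills the original target (pid 512937) and restarts it with nvmf_auth debug logging for the remaining cases. A minimal sketch of that restart, assuming the nvmfappstart/waitforlisten helpers behave as the surrounding trace shows; the netns name, binary path and pid are the ones from this run:

# restart the target with DH-HMAC-CHAP debug logging, as traced above
killprocess 512937                                   # stop the previously provisioned nvmf_tgt
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &     # hold startup until RPC init, log auth exchanges
nvmfpid=$!
waitforlisten "$nvmfpid"                             # blocks until /var/tmp/spdk.sock accepts RPCs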
00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.621 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.878 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.809 00:18:39.809 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.809 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.809 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.066 { 00:18:40.066 "cntlid": 1, 00:18:40.066 "qid": 0, 00:18:40.066 "state": "enabled", 00:18:40.066 "thread": "nvmf_tgt_poll_group_000", 00:18:40.066 "listen_address": { 00:18:40.066 "trtype": "TCP", 00:18:40.066 "adrfam": "IPv4", 00:18:40.066 "traddr": "10.0.0.2", 00:18:40.066 "trsvcid": "4420" 00:18:40.066 }, 00:18:40.066 "peer_address": { 00:18:40.066 "trtype": "TCP", 00:18:40.066 "adrfam": "IPv4", 00:18:40.066 "traddr": "10.0.0.1", 00:18:40.066 "trsvcid": "56756" 00:18:40.066 }, 00:18:40.066 "auth": { 00:18:40.066 "state": "completed", 00:18:40.066 "digest": "sha512", 00:18:40.066 "dhgroup": "ffdhe8192" 00:18:40.066 } 00:18:40.066 } 00:18:40.066 ]' 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.066 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.323 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:ZWZkZGRlZDRiYzc1ZDQwMjg3MTdhNDBlZThiYTc4YjVjZDFlODNhZGJiYjM1YzhhM2FiODg0Y2U4NDUyN2U1NRYNKM4=: 00:18:41.254 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.512 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:41.512 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.512 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:41.512 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:41.769 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.769 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:41.769 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.769 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:41.769 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:41.770 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:41.770 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:41.770 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.770 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.027 request: 00:18:42.027 { 00:18:42.027 "name": "nvme0", 00:18:42.027 "trtype": "tcp", 00:18:42.027 "traddr": "10.0.0.2", 00:18:42.027 "adrfam": "ipv4", 00:18:42.027 "trsvcid": "4420", 00:18:42.027 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:42.027 "prchk_reftag": false, 00:18:42.027 "prchk_guard": false, 00:18:42.027 "hdgst": false, 00:18:42.027 "ddgst": false, 00:18:42.027 "dhchap_key": "key3", 00:18:42.027 "method": "bdev_nvme_attach_controller", 00:18:42.027 "req_id": 1 00:18:42.027 } 00:18:42.027 Got JSON-RPC error response 00:18:42.027 response: 00:18:42.027 { 00:18:42.027 "code": -5, 00:18:42.027 "message": "Input/output error" 00:18:42.027 } 00:18:42.027 09:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:42.027 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.285 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.285 request: 00:18:42.285 { 00:18:42.285 "name": "nvme0", 00:18:42.285 "trtype": "tcp", 00:18:42.285 "traddr": "10.0.0.2", 00:18:42.285 "adrfam": "ipv4", 00:18:42.285 "trsvcid": "4420", 00:18:42.285 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:42.285 "prchk_reftag": false, 00:18:42.285 "prchk_guard": false, 00:18:42.285 "hdgst": false, 00:18:42.285 "ddgst": false, 00:18:42.285 "dhchap_key": "key3", 00:18:42.285 
"method": "bdev_nvme_attach_controller", 00:18:42.285 "req_id": 1 00:18:42.285 } 00:18:42.285 Got JSON-RPC error response 00:18:42.285 response: 00:18:42.285 { 00:18:42.285 "code": -5, 00:18:42.285 "message": "Input/output error" 00:18:42.285 } 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.543 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.801 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.059 request: 00:18:43.059 { 00:18:43.059 "name": "nvme0", 00:18:43.059 "trtype": "tcp", 00:18:43.059 "traddr": "10.0.0.2", 00:18:43.059 "adrfam": "ipv4", 00:18:43.059 "trsvcid": "4420", 00:18:43.059 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:18:43.059 "prchk_reftag": false, 00:18:43.059 "prchk_guard": false, 00:18:43.059 "hdgst": false, 00:18:43.059 "ddgst": false, 00:18:43.059 "dhchap_key": "key0", 00:18:43.059 "dhchap_ctrlr_key": "key1", 00:18:43.059 "method": "bdev_nvme_attach_controller", 00:18:43.059 "req_id": 1 00:18:43.059 } 00:18:43.059 Got JSON-RPC error response 00:18:43.059 response: 00:18:43.059 { 00:18:43.059 "code": -5, 00:18:43.059 "message": "Input/output error" 00:18:43.059 } 00:18:43.059 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:43.059 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.059 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:43.059 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.059 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.059 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.317 00:18:43.317 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:43.317 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
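[Editor's annotation, not part of the captured console output] The trace above exercises SPDK's DH-HMAC-CHAP paths from the host side: the target restricts which key a host NQN may use via nvmf_subsystem_add_host --dhchap-key, the host narrows or widens its offered digests/dhgroups via bdev_nvme_set_options, and bdev_nvme_attach_controller attempts with a non-matching key are expected to fail with JSON-RPC error -5 (Input/output error), while the allowed key attaches cleanly and is verified through bdev_nvme_get_controllers. The sketch below condenses that flow using only RPCs that appear in this log; the host NQN is a placeholder, and the key names (key0, key3) are assumed to have been registered earlier in the test, which this section does not show.

    #!/usr/bin/env bash
    # Sketch only: assumes a running nvmf_tgt on the default RPC socket and a
    # host-side bdev_nvme application listening on /var/tmp/host.sock, as traced above.
    rpc=scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

    # Target side: allow this host NQN only with key0.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:EXAMPLE --dhchap-key key0

    # Host side: advertise the full digest/dhgroup set.
    hostrpc bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    # Attaching with a mismatched key must fail (the harness wraps this in its NOT helper).
    if hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:EXAMPLE \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "unexpected success with mismatched key" >&2; exit 1
    fi

    # Attaching with the allowed key should succeed and show up as nvme0.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:EXAMPLE \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    hostrpc bdev_nvme_detach_controller nvme0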
00:18:43.317 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.574 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.574 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.574 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 513013 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 513013 ']' 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 513013 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 513013 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 513013' 00:18:43.832 killing process with pid 513013 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 513013 00:18:43.832 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 513013 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.397 rmmod nvme_tcp 00:18:44.397 rmmod nvme_fabrics 00:18:44.397 rmmod nvme_keyring 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
536153 ']' 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 536153 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 536153 ']' 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 536153 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 536153 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 536153' 00:18:44.397 killing process with pid 536153 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 536153 00:18:44.397 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 536153 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.655 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.558 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.558 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ESJ /tmp/spdk.key-sha256.cfi /tmp/spdk.key-sha384.YaY /tmp/spdk.key-sha512.Wrg /tmp/spdk.key-sha512.x0v /tmp/spdk.key-sha384.Wzk /tmp/spdk.key-sha256.0Ik '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:46.558 00:18:46.558 real 3m13.018s 00:18:46.558 user 7m27.875s 00:18:46.558 sys 0m25.134s 00:18:46.558 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.558 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.558 ************************************ 00:18:46.558 END TEST nvmf_auth_target 00:18:46.558 ************************************ 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.816 ************************************ 00:18:46.816 START TEST nvmf_bdevio_no_huge 00:18:46.816 ************************************ 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:46.816 * Looking for test storage... 00:18:46.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.816 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.817 09:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.817 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.348 09:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.348 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:49.349 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.349 09:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:49.349 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:49.349 Found net devices under 0000:82:00.0: cvl_0_0 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:49.349 Found net devices under 0000:82:00.1: cvl_0_1 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:49.349 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:49.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:18:49.349 00:18:49.349 --- 10.0.0.2 ping statistics --- 00:18:49.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.349 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:18:49.349 00:18:49.349 --- 10.0.0.1 ping statistics --- 00:18:49.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.349 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=538911 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 538911 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 538911 ']' 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
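[Editor's annotation, not part of the captured console output] The nvmf_tcp_init steps traced above build the topology the bdevio-no-huge run depends on: the target-side E810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace at 10.0.0.1/24, an iptables rule admits TCP/4420 from the initiator interface, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace with --no-huge -s 1024. A condensed sketch follows, using only commands visible in this trace; the interface names and binary path are specific to this host and would differ elsewhere.

    # Sketch, assuming two back-to-back E810 ports named cvl_0_0 (target) and cvl_0_1 (initiator).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # isolate the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # target reachable from the root namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # initiator reachable from the namespace

    # Start the target inside the namespace; --no-huge -s 1024 matches this hugepage-free variant.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &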
00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.349 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.349 [2024-07-25 09:33:21.753508] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:49.349 [2024-07-25 09:33:21.753593] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:49.349 [2024-07-25 09:33:21.833667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.349 [2024-07-25 09:33:21.945063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.349 [2024-07-25 09:33:21.945109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.349 [2024-07-25 09:33:21.945123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.349 [2024-07-25 09:33:21.945135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.349 [2024-07-25 09:33:21.945144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.349 [2024-07-25 09:33:21.945230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:49.349 [2024-07-25 09:33:21.945257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:49.349 [2024-07-25 09:33:21.946378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:49.349 [2024-07-25 09:33:21.946388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.350 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.350 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:49.350 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.350 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.350 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.608 [2024-07-25 09:33:22.098788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.608 09:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.608 Malloc0 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.608 [2024-07-25 09:33:22.137372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:49.608 { 00:18:49.608 "params": { 00:18:49.608 "name": "Nvme$subsystem", 00:18:49.608 "trtype": "$TEST_TRANSPORT", 00:18:49.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.608 "adrfam": "ipv4", 00:18:49.608 "trsvcid": "$NVMF_PORT", 00:18:49.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.608 "hdgst": ${hdgst:-false}, 00:18:49.608 "ddgst": ${ddgst:-false} 00:18:49.608 }, 00:18:49.608 "method": "bdev_nvme_attach_controller" 00:18:49.608 } 00:18:49.608 EOF 00:18:49.608 )") 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:49.608 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:49.608 "params": { 00:18:49.608 "name": "Nvme1", 00:18:49.608 "trtype": "tcp", 00:18:49.608 "traddr": "10.0.0.2", 00:18:49.608 "adrfam": "ipv4", 00:18:49.608 "trsvcid": "4420", 00:18:49.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.608 "hdgst": false, 00:18:49.608 "ddgst": false 00:18:49.608 }, 00:18:49.608 "method": "bdev_nvme_attach_controller" 00:18:49.608 }' 00:18:49.608 [2024-07-25 09:33:22.185159] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:49.608 [2024-07-25 09:33:22.185241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid538950 ] 00:18:49.608 [2024-07-25 09:33:22.250960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.867 [2024-07-25 09:33:22.367407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.867 [2024-07-25 09:33:22.367433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.867 [2024-07-25 09:33:22.367437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.125 I/O targets: 00:18:50.125 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:50.125 00:18:50.125 00:18:50.125 CUnit - A unit testing framework for C - Version 2.1-3 00:18:50.125 http://cunit.sourceforge.net/ 00:18:50.125 00:18:50.125 00:18:50.125 Suite: bdevio tests on: Nvme1n1 00:18:50.125 Test: blockdev write read block ...passed 00:18:50.125 Test: blockdev write zeroes read block ...passed 00:18:50.125 Test: blockdev write zeroes read no split ...passed 00:18:50.125 Test: blockdev write zeroes read split ...passed 00:18:50.125 Test: blockdev write zeroes read split partial ...passed 00:18:50.125 Test: blockdev reset ...[2024-07-25 09:33:22.779875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.125 [2024-07-25 09:33:22.779990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252fb0 (9): Bad file descriptor 00:18:50.125 [2024-07-25 09:33:22.791551] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:50.125 passed 00:18:50.125 Test: blockdev write read 8 blocks ...passed 00:18:50.125 Test: blockdev write read size > 128k ...passed 00:18:50.125 Test: blockdev write read invalid size ...passed 00:18:50.383 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:50.383 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:50.383 Test: blockdev write read max offset ...passed 00:18:50.383 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:50.383 Test: blockdev writev readv 8 blocks ...passed 00:18:50.383 Test: blockdev writev readv 30 x 1block ...passed 00:18:50.383 Test: blockdev writev readv block ...passed 00:18:50.383 Test: blockdev writev readv size > 128k ...passed 00:18:50.383 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:50.383 Test: blockdev comparev and writev ...[2024-07-25 09:33:23.004915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.004951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.004977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.004995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.005347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.005381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.005406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.005423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.005772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.005797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.005820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.005836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.006209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.006234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.006256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.383 [2024-07-25 09:33:23.006273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.383 passed 00:18:50.383 Test: blockdev nvme passthru rw ...passed 00:18:50.383 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:33:23.088602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.383 [2024-07-25 09:33:23.088631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.088775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.383 [2024-07-25 09:33:23.088800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.088936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.383 [2024-07-25 09:33:23.088960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.383 [2024-07-25 09:33:23.089107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.383 [2024-07-25 09:33:23.089130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.383 passed 00:18:50.383 Test: blockdev nvme admin passthru ...passed 00:18:50.641 Test: blockdev copy ...passed 00:18:50.641 00:18:50.641 Run Summary: Type Total Ran Passed Failed Inactive 00:18:50.641 suites 1 1 n/a 0 0 00:18:50.641 tests 23 23 23 0 0 00:18:50.641 asserts 152 152 152 0 n/a 00:18:50.641 00:18:50.641 Elapsed time = 0.982 seconds 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.899 rmmod nvme_tcp 00:18:50.899 rmmod nvme_fabrics 00:18:50.899 rmmod nvme_keyring 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.899 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 538911 ']' 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 538911 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 538911 ']' 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 538911 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 538911 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 538911' 00:18:50.900 killing process with pid 538911 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 538911 00:18:50.900 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 538911 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.467 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.374 00:18:53.374 real 0m6.682s 00:18:53.374 user 0m10.813s 00:18:53.374 sys 0m2.625s 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.374 ************************************ 00:18:53.374 END TEST nvmf_bdevio_no_huge 00:18:53.374 ************************************ 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.374 ************************************ 00:18:53.374 START TEST nvmf_tls 00:18:53.374 ************************************ 00:18:53.374 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:53.633 * Looking for test storage... 00:18:53.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.633 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
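One detail from the nvmf/common.sh sourcing above: the initiator identity comes from nvme gen-hostnqn, which wraps a UUID (the machine's own where available, otherwise a freshly generated one) in the standard NQN prefix, giving the nqn.2014-08.org.nvmexpress:uuid:... value stored in NVME_HOSTNQN. A rough stand-in, assuming uuidgen is installed; gen_hostnqn is an illustrative name, not a helper from these scripts:

  # Rough equivalent of `nvme gen-hostnqn`: a UUID behind the standard NQN prefix.
  gen_hostnqn() {
      printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"
  }
  gen_hostnqn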
00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.634 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.533 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:55.534 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:55.534 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:55.534 Found net devices under 0000:82:00.0: cvl_0_0 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:55.534 Found net devices under 0000:82:00.1: cvl_0_1 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.534 09:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.534 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:18:55.792 00:18:55.792 --- 10.0.0.2 ping statistics --- 00:18:55.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.792 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:18:55.792 00:18:55.792 --- 10.0.0.1 ping statistics --- 00:18:55.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.792 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=541132 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 541132 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 541132 ']' 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.792 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.792 [2024-07-25 09:33:28.447264] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
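Before the target application start that begins above, the nvmf_tcp_init sequence split the two E810 ports between initiator and target: cvl_0_0 was moved into a private network namespace for the SPDK target, both sides got addresses from the 10.0.0.0/24 test subnet, port 4420 was opened in iptables, and the two pings confirmed reachability in both directions. A condensed sketch of those steps, assuming the same interface and namespace names seen above and root privileges:

  # Target side lives in its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator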
00:18:55.792 [2024-07-25 09:33:28.447353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.792 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.792 [2024-07-25 09:33:28.513510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.050 [2024-07-25 09:33:28.624325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.050 [2024-07-25 09:33:28.624410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.050 [2024-07-25 09:33:28.624423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.050 [2024-07-25 09:33:28.624434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.050 [2024-07-25 09:33:28.624444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.050 [2024-07-25 09:33:28.624469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:56.050 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:56.308 true 00:18:56.308 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.308 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:56.566 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:56.566 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:56.566 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:56.823 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.823 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:57.081 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:57.081 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:57.081 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:57.339 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.339 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:57.597 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:57.597 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:57.597 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.597 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:57.854 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:57.854 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:57.854 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:58.112 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.112 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:58.369 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:58.369 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:58.369 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:58.626 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.626 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.U8H88AhJdZ 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.MmHETCEk7n 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.U8H88AhJdZ 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MmHETCEk7n 00:18:58.884 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:59.141 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:59.707 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.U8H88AhJdZ 00:18:59.707 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.U8H88AhJdZ 00:18:59.707 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.707 [2024-07-25 09:33:32.412682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.707 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:59.965 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:00.223 [2024-07-25 09:33:32.914050] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.223 [2024-07-25 09:33:32.914294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.223 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.481 malloc0 00:19:00.481 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.739 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U8H88AhJdZ 00:19:00.996 [2024-07-25 09:33:33.660080] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.996 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.U8H88AhJdZ 00:19:00.996 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.188 Initializing NVMe Controllers 00:19:13.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:13.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:13.188 Initialization complete. Launching workers. 00:19:13.188 ======================================================== 00:19:13.188 Latency(us) 00:19:13.188 Device Information : IOPS MiB/s Average min max 00:19:13.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7715.46 30.14 8297.77 1357.01 9598.73 00:19:13.188 ======================================================== 00:19:13.188 Total : 7715.46 30.14 8297.77 1357.01 9598.73 00:19:13.188 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U8H88AhJdZ 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.U8H88AhJdZ' 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=542902 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 542902 /var/tmp/bdevperf.sock 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 542902 ']' 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.188 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.188 [2024-07-25 09:33:43.843415] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:13.188 [2024-07-25 09:33:43.843492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542902 ] 00:19:13.188 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.188 [2024-07-25 09:33:43.904675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.188 [2024-07-25 09:33:44.015626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.188 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.188 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.188 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U8H88AhJdZ 00:19:13.188 [2024-07-25 09:33:44.353960] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.188 [2024-07-25 09:33:44.354082] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:13.188 TLSTESTn1 00:19:13.188 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:13.188 Running I/O for 10 seconds... 
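While the 10 second run above is in flight, note how the key file both sides were pointed at was built: format_interchange_psk, backed by the python heredoc in nvmf/common.sh, takes the raw hex string and a hash identifier and emits the TLS PSK interchange form, i.e. the configured key bytes followed by their CRC-32 (little endian), base64 encoded behind the NVMeTLSkey-1:<id>: prefix. The block below is a sketch of that transformation under those assumptions, an approximation of the helper rather than a copy of it, and assumes python3 on PATH:

  # Approximation of format_interchange_psk: base64(key bytes || CRC-32) behind the
  # NVMeTLSkey-1:<hash id>: prefix, e.g. key 00112233445566778899aabbccddeeff with id 1.
  format_psk_sketch() {
      local key=$1 digest=$2
      python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); d = int(sys.argv[2]); crc = zlib.crc32(k).to_bytes(4, "little"); print(f"NVMeTLSkey-1:{d:02}:{base64.b64encode(k + crc).decode()}:")' "$key" "$digest"
  }
  format_psk_sketch 00112233445566778899aabbccddeeff 1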
00:19:23.152 00:19:23.152 Latency(us) 00:19:23.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.152 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:23.152 Verification LBA range: start 0x0 length 0x2000 00:19:23.152 TLSTESTn1 : 10.03 3416.53 13.35 0.00 0.00 37387.52 8204.14 65244.73 00:19:23.152 =================================================================================================================== 00:19:23.152 Total : 3416.53 13.35 0.00 0.00 37387.52 8204.14 65244.73 00:19:23.152 0 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 542902 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 542902 ']' 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 542902 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 542902 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 542902' 00:19:23.152 killing process with pid 542902 00:19:23.152 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 542902 00:19:23.152 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.152 00:19:23.153 Latency(us) 00:19:23.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.153 =================================================================================================================== 00:19:23.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.153 [2024-07-25 09:33:54.660454] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 542902 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MmHETCEk7n 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MmHETCEk7n 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.153 
09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MmHETCEk7n 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MmHETCEk7n' 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=544220 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 544220 /var/tmp/bdevperf.sock 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 544220 ']' 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.153 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.153 [2024-07-25 09:33:54.976600] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:19:23.153 [2024-07-25 09:33:54.976695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544220 ] 00:19:23.153 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.153 [2024-07-25 09:33:55.033369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.153 [2024-07-25 09:33:55.135626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MmHETCEk7n 00:19:23.153 [2024-07-25 09:33:55.498090] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.153 [2024-07-25 09:33:55.498205] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:23.153 [2024-07-25 09:33:55.503476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.153 [2024-07-25 09:33:55.503934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114cf90 (107): Transport endpoint is not connected 00:19:23.153 [2024-07-25 09:33:55.504923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114cf90 (9): Bad file descriptor 00:19:23.153 [2024-07-25 09:33:55.505921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.153 [2024-07-25 09:33:55.505938] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.153 [2024-07-25 09:33:55.505969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
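This attach attempt hands the initiator /tmp/tmp.MmHETCEk7n, the second key, which was never registered for host1 on cnode1, so TLS setup fails, the controller never leaves its error state, and bdev_nvme_attach_controller returns the Input/output error shown in the request/response dump that follows. The surrounding NOT wrapper from autotest_common.sh turns that expected failure into a pass; a simplified stand-in for its behaviour (the real helper also tracks the exact exit status, as the es bookkeeping in the trace shows):

  # Simplified NOT: succeed only when the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1
      else
          return 0
      fi
  }
  NOT false && echo 'failure observed, as the test expects'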
00:19:23.153 request: 00:19:23.153 { 00:19:23.153 "name": "TLSTEST", 00:19:23.153 "trtype": "tcp", 00:19:23.153 "traddr": "10.0.0.2", 00:19:23.153 "adrfam": "ipv4", 00:19:23.153 "trsvcid": "4420", 00:19:23.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.153 "prchk_reftag": false, 00:19:23.153 "prchk_guard": false, 00:19:23.153 "hdgst": false, 00:19:23.153 "ddgst": false, 00:19:23.153 "psk": "/tmp/tmp.MmHETCEk7n", 00:19:23.153 "method": "bdev_nvme_attach_controller", 00:19:23.153 "req_id": 1 00:19:23.153 } 00:19:23.153 Got JSON-RPC error response 00:19:23.153 response: 00:19:23.153 { 00:19:23.153 "code": -5, 00:19:23.153 "message": "Input/output error" 00:19:23.153 } 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 544220 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 544220 ']' 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 544220 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544220 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544220' 00:19:23.153 killing process with pid 544220 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 544220 00:19:23.153 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.153 00:19:23.153 Latency(us) 00:19:23.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.153 =================================================================================================================== 00:19:23.153 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.153 [2024-07-25 09:33:55.552818] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 544220 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.U8H88AhJdZ 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.U8H88AhJdZ 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.U8H88AhJdZ 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.U8H88AhJdZ' 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=544362 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 544362 /var/tmp/bdevperf.sock 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 544362 ']' 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.153 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.154 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.154 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.154 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.154 [2024-07-25 09:33:55.863583] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:19:23.154 [2024-07-25 09:33:55.863660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544362 ] 00:19:23.411 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.411 [2024-07-25 09:33:55.922078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.411 [2024-07-25 09:33:56.029294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.411 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.668 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.668 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.U8H88AhJdZ 00:19:23.925 [2024-07-25 09:33:56.415997] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.925 [2024-07-25 09:33:56.416120] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:23.925 [2024-07-25 09:33:56.422822] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:23.925 [2024-07-25 09:33:56.422852] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:23.925 [2024-07-25 09:33:56.422907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.925 [2024-07-25 09:33:56.423966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd5f90 (107): Transport endpoint is not connected 00:19:23.925 [2024-07-25 09:33:56.424956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd5f90 (9): Bad file descriptor 00:19:23.925 [2024-07-25 09:33:56.425955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.925 [2024-07-25 09:33:56.425973] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.925 [2024-07-25 09:33:56.426004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:23.925 request: 00:19:23.925 { 00:19:23.925 "name": "TLSTEST", 00:19:23.925 "trtype": "tcp", 00:19:23.925 "traddr": "10.0.0.2", 00:19:23.925 "adrfam": "ipv4", 00:19:23.925 "trsvcid": "4420", 00:19:23.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.925 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:23.925 "prchk_reftag": false, 00:19:23.925 "prchk_guard": false, 00:19:23.925 "hdgst": false, 00:19:23.925 "ddgst": false, 00:19:23.925 "psk": "/tmp/tmp.U8H88AhJdZ", 00:19:23.925 "method": "bdev_nvme_attach_controller", 00:19:23.925 "req_id": 1 00:19:23.925 } 00:19:23.925 Got JSON-RPC error response 00:19:23.925 response: 00:19:23.925 { 00:19:23.925 "code": -5, 00:19:23.925 "message": "Input/output error" 00:19:23.925 } 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 544362 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 544362 ']' 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 544362 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544362 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544362' 00:19:23.925 killing process with pid 544362 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 544362 00:19:23.925 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.925 00:19:23.925 Latency(us) 00:19:23.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.925 =================================================================================================================== 00:19:23.925 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.925 [2024-07-25 09:33:56.477670] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.925 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 544362 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.U8H88AhJdZ 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.U8H88AhJdZ 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.U8H88AhJdZ 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.U8H88AhJdZ' 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=544498 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 544498 /var/tmp/bdevperf.sock 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 544498 ']' 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.183 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.183 [2024-07-25 09:33:56.777841] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:19:24.183 [2024-07-25 09:33:56.777923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544498 ] 00:19:24.183 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.183 [2024-07-25 09:33:56.834502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.441 [2024-07-25 09:33:56.937365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.441 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.441 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:24.441 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U8H88AhJdZ 00:19:24.698 [2024-07-25 09:33:57.319988] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.698 [2024-07-25 09:33:57.320111] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:24.698 [2024-07-25 09:33:57.325139] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:24.698 [2024-07-25 09:33:57.325168] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:24.698 [2024-07-25 09:33:57.325223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.698 [2024-07-25 09:33:57.325812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1768f90 (107): Transport endpoint is not connected 00:19:24.698 [2024-07-25 09:33:57.326800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1768f90 (9): Bad file descriptor 00:19:24.698 [2024-07-25 09:33:57.327798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:24.698 [2024-07-25 09:33:57.327817] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.698 [2024-07-25 09:33:57.327847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:24.698 request: 00:19:24.698 { 00:19:24.698 "name": "TLSTEST", 00:19:24.698 "trtype": "tcp", 00:19:24.698 "traddr": "10.0.0.2", 00:19:24.698 "adrfam": "ipv4", 00:19:24.698 "trsvcid": "4420", 00:19:24.698 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:24.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.698 "prchk_reftag": false, 00:19:24.698 "prchk_guard": false, 00:19:24.698 "hdgst": false, 00:19:24.698 "ddgst": false, 00:19:24.698 "psk": "/tmp/tmp.U8H88AhJdZ", 00:19:24.698 "method": "bdev_nvme_attach_controller", 00:19:24.698 "req_id": 1 00:19:24.698 } 00:19:24.698 Got JSON-RPC error response 00:19:24.698 response: 00:19:24.698 { 00:19:24.698 "code": -5, 00:19:24.698 "message": "Input/output error" 00:19:24.698 } 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 544498 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 544498 ']' 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 544498 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544498 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544498' 00:19:24.698 killing process with pid 544498 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 544498 00:19:24.698 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.698 00:19:24.698 Latency(us) 00:19:24.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.698 =================================================================================================================== 00:19:24.698 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.698 [2024-07-25 09:33:57.375338] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:24.698 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 544498 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=544632 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 544632 /var/tmp/bdevperf.sock 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 544632 ']' 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.956 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.956 [2024-07-25 09:33:57.656922] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:19:24.956 [2024-07-25 09:33:57.657010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544632 ] 00:19:24.956 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.214 [2024-07-25 09:33:57.717467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.214 [2024-07-25 09:33:57.826095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.214 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.214 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:25.214 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:25.471 [2024-07-25 09:33:58.147373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.471 [2024-07-25 09:33:58.149582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff8770 (9): Bad file descriptor 00:19:25.471 [2024-07-25 09:33:58.150578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.471 [2024-07-25 09:33:58.150597] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.471 [2024-07-25 09:33:58.150628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:25.471 request: 00:19:25.471 { 00:19:25.471 "name": "TLSTEST", 00:19:25.471 "trtype": "tcp", 00:19:25.471 "traddr": "10.0.0.2", 00:19:25.471 "adrfam": "ipv4", 00:19:25.471 "trsvcid": "4420", 00:19:25.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.471 "prchk_reftag": false, 00:19:25.471 "prchk_guard": false, 00:19:25.471 "hdgst": false, 00:19:25.471 "ddgst": false, 00:19:25.471 "method": "bdev_nvme_attach_controller", 00:19:25.471 "req_id": 1 00:19:25.471 } 00:19:25.471 Got JSON-RPC error response 00:19:25.471 response: 00:19:25.471 { 00:19:25.471 "code": -5, 00:19:25.471 "message": "Input/output error" 00:19:25.471 } 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 544632 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 544632 ']' 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 544632 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544632 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544632' 00:19:25.471 killing process with pid 544632 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 544632 00:19:25.471 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.471 00:19:25.471 Latency(us) 00:19:25.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.471 =================================================================================================================== 00:19:25.471 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.471 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 544632 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 541132 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 541132 ']' 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 541132 00:19:25.729 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 541132 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 541132' 00:19:25.987 killing process with pid 541132 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 541132 00:19:25.987 [2024-07-25 09:33:58.488623] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:25.987 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 541132 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ZEzMqqkb86 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ZEzMqqkb86 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=544782 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 544782 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 544782 ']' 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.245 09:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.245 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.245 [2024-07-25 09:33:58.879679] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:26.245 [2024-07-25 09:33:58.879773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.245 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.245 [2024-07-25 09:33:58.943313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.503 [2024-07-25 09:33:59.057203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.503 [2024-07-25 09:33:59.057256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.503 [2024-07-25 09:33:59.057285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.503 [2024-07-25 09:33:59.057297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.504 [2024-07-25 09:33:59.057307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
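Note on the key material generated above: the format_interchange_psk / format_key helpers pipe a short script into "python -" whose body is not echoed in this log; only the resulting key_long value (NVMeTLSkey-1:02:MDAx...wWXNJw==:) is visible. Below is a minimal sketch of how such a value could be produced, assuming the payload is the configured key text followed by its CRC-32 in little-endian order, base64-encoded behind an NVMeTLSkey-1:<hash>: prefix; the helper name and the CRC/endianness details are assumptions, not taken from the log.

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    # Assumption: append CRC-32 of the key text (little-endian), then base64 key+crc.
    crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
    payload = base64.b64encode(key.encode() + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, hash_id, payload)

# With the hash argument 2 used above this is intended to reproduce the key_long
# value captured in the log, provided the CRC/endianness assumption holds.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))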
00:19:26.504 [2024-07-25 09:33:59.057335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ZEzMqqkb86 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZEzMqqkb86 00:19:26.504 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:26.760 [2024-07-25 09:33:59.473566] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.760 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.324 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.324 [2024-07-25 09:34:00.047130] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.324 [2024-07-25 09:34:00.047411] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.582 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.582 malloc0 00:19:27.582 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.840 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:19:28.098 [2024-07-25 09:34:00.764696] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZEzMqqkb86 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZEzMqqkb86' 00:19:28.098 09:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=544951 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 544951 /var/tmp/bdevperf.sock 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 544951 ']' 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.098 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.098 [2024-07-25 09:34:00.830368] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:28.098 [2024-07-25 09:34:00.830449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544951 ] 00:19:28.357 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.357 [2024-07-25 09:34:00.893673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.357 [2024-07-25 09:34:01.003163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.615 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.615 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.615 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:19:28.615 [2024-07-25 09:34:01.345075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.615 [2024-07-25 09:34:01.345199] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:28.873 TLSTESTn1 00:19:28.873 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:28.873 Running I/O for 10 seconds... 
00:19:38.942 00:19:38.942 Latency(us) 00:19:38.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.942 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:38.942 Verification LBA range: start 0x0 length 0x2000 00:19:38.942 TLSTESTn1 : 10.02 3350.78 13.09 0.00 0.00 38142.66 6456.51 100197.26 00:19:38.942 =================================================================================================================== 00:19:38.942 Total : 3350.78 13.09 0.00 0.00 38142.66 6456.51 100197.26 00:19:38.942 0 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 544951 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 544951 ']' 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 544951 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544951 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544951' 00:19:38.942 killing process with pid 544951 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 544951 00:19:38.942 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.942 00:19:38.942 Latency(us) 00:19:38.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.942 =================================================================================================================== 00:19:38.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.942 [2024-07-25 09:34:11.646167] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:38.942 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 544951 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ZEzMqqkb86 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZEzMqqkb86 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZEzMqqkb86 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:39.200 
09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZEzMqqkb86 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZEzMqqkb86' 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=546276 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 546276 /var/tmp/bdevperf.sock 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 546276 ']' 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.200 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.458 [2024-07-25 09:34:11.960767] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:19:39.458 [2024-07-25 09:34:11.960848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546276 ] 00:19:39.458 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.458 [2024-07-25 09:34:12.018342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.458 [2024-07-25 09:34:12.120331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.716 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.716 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:39.716 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:19:39.974 [2024-07-25 09:34:12.451677] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.974 [2024-07-25 09:34:12.451776] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:39.974 [2024-07-25 09:34:12.451792] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ZEzMqqkb86 00:19:39.974 request: 00:19:39.974 { 00:19:39.974 "name": "TLSTEST", 00:19:39.974 "trtype": "tcp", 00:19:39.974 "traddr": "10.0.0.2", 00:19:39.974 "adrfam": "ipv4", 00:19:39.974 "trsvcid": "4420", 00:19:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.974 "prchk_reftag": false, 00:19:39.974 "prchk_guard": false, 00:19:39.974 "hdgst": false, 00:19:39.974 "ddgst": false, 00:19:39.974 "psk": "/tmp/tmp.ZEzMqqkb86", 00:19:39.974 "method": "bdev_nvme_attach_controller", 00:19:39.974 "req_id": 1 00:19:39.974 } 00:19:39.974 Got JSON-RPC error response 00:19:39.974 response: 00:19:39.974 { 00:19:39.975 "code": -1, 00:19:39.975 "message": "Operation not permitted" 00:19:39.975 } 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 546276 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 546276 ']' 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 546276 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 546276 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 546276' 00:19:39.975 killing process with pid 546276 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 546276 00:19:39.975 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.975 
00:19:39.975 Latency(us) 00:19:39.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.975 =================================================================================================================== 00:19:39.975 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.975 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 546276 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 544782 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 544782 ']' 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 544782 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544782 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544782' 00:19:40.232 killing process with pid 544782 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 544782 00:19:40.232 [2024-07-25 09:34:12.786756] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:40.232 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 544782 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=546420 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 546420 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 546420 ']' 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.491 09:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.491 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.491 [2024-07-25 09:34:13.116654] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:40.491 [2024-07-25 09:34:13.116749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.491 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.491 [2024-07-25 09:34:13.177594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.752 [2024-07-25 09:34:13.284310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.752 [2024-07-25 09:34:13.284392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.752 [2024-07-25 09:34:13.284408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.752 [2024-07-25 09:34:13.284420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.752 [2024-07-25 09:34:13.284443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
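Aside on the earlier attach failures in this section: the errors "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" (and the host1/cnode2 variant) show the TLS PSK identity the target tries to look up. Judging from those messages alone, the identity is the token NVMe0R01 followed by the host NQN and the subsystem NQN, space-separated, so a key registered only for host1 and cnode1 cannot match a connection made as host2 or to cnode2. A small sketch of that composition follows; the rule is inferred from the log lines, not from the protocol specification.

def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    # "NVMe0R01" is copied verbatim from the error messages above.
    return "NVMe0R01 {} {}".format(hostnqn, subnqn)

# These are the two identities the target failed to find a PSK for:
print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
print(tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))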
00:19:40.753 [2024-07-25 09:34:13.284471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ZEzMqqkb86 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZEzMqqkb86 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.753 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ZEzMqqkb86 00:19:40.754 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZEzMqqkb86 00:19:40.754 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.017 [2024-07-25 09:34:13.694180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.017 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.274 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.532 [2024-07-25 09:34:14.207545] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.532 [2024-07-25 09:34:14.207769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.532 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.098 malloc0 00:19:42.098 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.098 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:19:42.356 [2024-07-25 09:34:15.005551] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:42.356 [2024-07-25 09:34:15.005588] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:42.356 [2024-07-25 09:34:15.005624] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:42.356 request: 00:19:42.356 { 00:19:42.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.356 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.356 "psk": "/tmp/tmp.ZEzMqqkb86", 00:19:42.356 "method": "nvmf_subsystem_add_host", 00:19:42.356 "req_id": 1 00:19:42.356 } 00:19:42.356 Got JSON-RPC error response 00:19:42.356 response: 00:19:42.356 { 00:19:42.356 "code": -32603, 00:19:42.356 "message": "Internal error" 00:19:42.356 } 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 546420 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 546420 ']' 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 546420 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 546420 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 546420' 00:19:42.356 killing process with pid 546420 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 546420 00:19:42.356 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 546420 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ZEzMqqkb86 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=546714 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
546714 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 546714 ']' 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.614 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.872 [2024-07-25 09:34:15.393672] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:42.872 [2024-07-25 09:34:15.393775] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.872 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.872 [2024-07-25 09:34:15.463525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.872 [2024-07-25 09:34:15.572456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.872 [2024-07-25 09:34:15.572508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.872 [2024-07-25 09:34:15.572536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.872 [2024-07-25 09:34:15.572548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.872 [2024-07-25 09:34:15.572557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
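Editor's note: the "Incorrect permissions for PSK file" / "code": -32603 failure above is the intended negative case at target/tls.sh:177 — nvmf_subsystem_add_host refuses a PSK file whose mode is too open, and the script only proceeds after tightening it to 0600 at tls.sh:181. A minimal sketch of that check, assuming a target already listening on the default /var/tmp/spdk.sock, the SPDK source tree as working directory, and an illustrative loose mode of 0644 (the log does not record the original mode); the key path is the temporary file created earlier in this run:

    # Loose permissions: the target rejects the key and the RPC returns -32603
    chmod 0644 /tmp/tmp.ZEzMqqkb86
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 || echo "rejected as expected"

    # Owner-only permissions: the same call succeeds and the TLS host can connect
    chmod 0600 /tmp/tmp.ZEzMqqkb86
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86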
00:19:42.872 [2024-07-25 09:34:15.572583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ZEzMqqkb86 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZEzMqqkb86 00:19:43.130 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.388 [2024-07-25 09:34:15.937174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.388 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.645 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.902 [2024-07-25 09:34:16.426495] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.902 [2024-07-25 09:34:16.426735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.902 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.160 malloc0 00:19:44.160 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.418 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:19:44.418 [2024-07-25 09:34:17.151066] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=546994 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 546994 /var/tmp/bdevperf.sock 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # 
'[' -z 546994 ']' 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.676 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.676 [2024-07-25 09:34:17.212537] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:44.676 [2024-07-25 09:34:17.212610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546994 ] 00:19:44.676 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.676 [2024-07-25 09:34:17.269294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.676 [2024-07-25 09:34:17.374593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.934 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.934 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:44.934 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:19:45.192 [2024-07-25 09:34:17.707182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.192 [2024-07-25 09:34:17.707299] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:45.192 TLSTESTn1 00:19:45.192 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:45.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:45.450 "subsystems": [ 00:19:45.450 { 00:19:45.450 "subsystem": "keyring", 00:19:45.450 "config": [] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "iobuf", 00:19:45.450 "config": [ 00:19:45.450 { 00:19:45.450 "method": "iobuf_set_options", 00:19:45.450 "params": { 00:19:45.450 "small_pool_count": 8192, 00:19:45.450 "large_pool_count": 1024, 00:19:45.450 "small_bufsize": 8192, 00:19:45.450 "large_bufsize": 135168 00:19:45.450 } 00:19:45.450 } 00:19:45.450 ] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "sock", 00:19:45.450 "config": [ 00:19:45.450 { 00:19:45.450 "method": "sock_set_default_impl", 00:19:45.450 "params": { 00:19:45.450 "impl_name": "posix" 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "sock_impl_set_options", 00:19:45.450 "params": { 00:19:45.450 "impl_name": "ssl", 00:19:45.450 "recv_buf_size": 4096, 00:19:45.450 "send_buf_size": 4096, 
00:19:45.450 "enable_recv_pipe": true, 00:19:45.450 "enable_quickack": false, 00:19:45.450 "enable_placement_id": 0, 00:19:45.450 "enable_zerocopy_send_server": true, 00:19:45.450 "enable_zerocopy_send_client": false, 00:19:45.450 "zerocopy_threshold": 0, 00:19:45.450 "tls_version": 0, 00:19:45.450 "enable_ktls": false 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "sock_impl_set_options", 00:19:45.450 "params": { 00:19:45.450 "impl_name": "posix", 00:19:45.450 "recv_buf_size": 2097152, 00:19:45.450 "send_buf_size": 2097152, 00:19:45.450 "enable_recv_pipe": true, 00:19:45.450 "enable_quickack": false, 00:19:45.450 "enable_placement_id": 0, 00:19:45.450 "enable_zerocopy_send_server": true, 00:19:45.450 "enable_zerocopy_send_client": false, 00:19:45.450 "zerocopy_threshold": 0, 00:19:45.450 "tls_version": 0, 00:19:45.450 "enable_ktls": false 00:19:45.450 } 00:19:45.450 } 00:19:45.450 ] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "vmd", 00:19:45.450 "config": [] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "accel", 00:19:45.450 "config": [ 00:19:45.450 { 00:19:45.450 "method": "accel_set_options", 00:19:45.450 "params": { 00:19:45.450 "small_cache_size": 128, 00:19:45.450 "large_cache_size": 16, 00:19:45.450 "task_count": 2048, 00:19:45.450 "sequence_count": 2048, 00:19:45.450 "buf_count": 2048 00:19:45.450 } 00:19:45.450 } 00:19:45.450 ] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "bdev", 00:19:45.450 "config": [ 00:19:45.450 { 00:19:45.450 "method": "bdev_set_options", 00:19:45.450 "params": { 00:19:45.450 "bdev_io_pool_size": 65535, 00:19:45.450 "bdev_io_cache_size": 256, 00:19:45.450 "bdev_auto_examine": true, 00:19:45.450 "iobuf_small_cache_size": 128, 00:19:45.450 "iobuf_large_cache_size": 16 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "bdev_raid_set_options", 00:19:45.450 "params": { 00:19:45.450 "process_window_size_kb": 1024, 00:19:45.450 "process_max_bandwidth_mb_sec": 0 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "bdev_iscsi_set_options", 00:19:45.450 "params": { 00:19:45.450 "timeout_sec": 30 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "bdev_nvme_set_options", 00:19:45.450 "params": { 00:19:45.450 "action_on_timeout": "none", 00:19:45.450 "timeout_us": 0, 00:19:45.450 "timeout_admin_us": 0, 00:19:45.450 "keep_alive_timeout_ms": 10000, 00:19:45.450 "arbitration_burst": 0, 00:19:45.450 "low_priority_weight": 0, 00:19:45.450 "medium_priority_weight": 0, 00:19:45.450 "high_priority_weight": 0, 00:19:45.450 "nvme_adminq_poll_period_us": 10000, 00:19:45.450 "nvme_ioq_poll_period_us": 0, 00:19:45.450 "io_queue_requests": 0, 00:19:45.450 "delay_cmd_submit": true, 00:19:45.450 "transport_retry_count": 4, 00:19:45.450 "bdev_retry_count": 3, 00:19:45.450 "transport_ack_timeout": 0, 00:19:45.450 "ctrlr_loss_timeout_sec": 0, 00:19:45.450 "reconnect_delay_sec": 0, 00:19:45.450 "fast_io_fail_timeout_sec": 0, 00:19:45.450 "disable_auto_failback": false, 00:19:45.450 "generate_uuids": false, 00:19:45.450 "transport_tos": 0, 00:19:45.450 "nvme_error_stat": false, 00:19:45.450 "rdma_srq_size": 0, 00:19:45.450 "io_path_stat": false, 00:19:45.450 "allow_accel_sequence": false, 00:19:45.450 "rdma_max_cq_size": 0, 00:19:45.450 "rdma_cm_event_timeout_ms": 0, 00:19:45.450 "dhchap_digests": [ 00:19:45.450 "sha256", 00:19:45.450 "sha384", 00:19:45.450 "sha512" 00:19:45.450 ], 00:19:45.450 "dhchap_dhgroups": [ 00:19:45.450 "null", 00:19:45.450 "ffdhe2048", 00:19:45.450 
"ffdhe3072", 00:19:45.450 "ffdhe4096", 00:19:45.450 "ffdhe6144", 00:19:45.450 "ffdhe8192" 00:19:45.450 ] 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "bdev_nvme_set_hotplug", 00:19:45.450 "params": { 00:19:45.450 "period_us": 100000, 00:19:45.450 "enable": false 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "bdev_malloc_create", 00:19:45.450 "params": { 00:19:45.450 "name": "malloc0", 00:19:45.450 "num_blocks": 8192, 00:19:45.450 "block_size": 4096, 00:19:45.450 "physical_block_size": 4096, 00:19:45.450 "uuid": "b7fef082-3f05-481e-a3d5-1ad4f7f4e5f2", 00:19:45.450 "optimal_io_boundary": 0, 00:19:45.450 "md_size": 0, 00:19:45.450 "dif_type": 0, 00:19:45.450 "dif_is_head_of_md": false, 00:19:45.450 "dif_pi_format": 0 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "method": "bdev_wait_for_examine" 00:19:45.450 } 00:19:45.450 ] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "nbd", 00:19:45.450 "config": [] 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "subsystem": "scheduler", 00:19:45.451 "config": [ 00:19:45.451 { 00:19:45.451 "method": "framework_set_scheduler", 00:19:45.451 "params": { 00:19:45.451 "name": "static" 00:19:45.451 } 00:19:45.451 } 00:19:45.451 ] 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "subsystem": "nvmf", 00:19:45.451 "config": [ 00:19:45.451 { 00:19:45.451 "method": "nvmf_set_config", 00:19:45.451 "params": { 00:19:45.451 "discovery_filter": "match_any", 00:19:45.451 "admin_cmd_passthru": { 00:19:45.451 "identify_ctrlr": false 00:19:45.451 } 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_set_max_subsystems", 00:19:45.451 "params": { 00:19:45.451 "max_subsystems": 1024 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_set_crdt", 00:19:45.451 "params": { 00:19:45.451 "crdt1": 0, 00:19:45.451 "crdt2": 0, 00:19:45.451 "crdt3": 0 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_create_transport", 00:19:45.451 "params": { 00:19:45.451 "trtype": "TCP", 00:19:45.451 "max_queue_depth": 128, 00:19:45.451 "max_io_qpairs_per_ctrlr": 127, 00:19:45.451 "in_capsule_data_size": 4096, 00:19:45.451 "max_io_size": 131072, 00:19:45.451 "io_unit_size": 131072, 00:19:45.451 "max_aq_depth": 128, 00:19:45.451 "num_shared_buffers": 511, 00:19:45.451 "buf_cache_size": 4294967295, 00:19:45.451 "dif_insert_or_strip": false, 00:19:45.451 "zcopy": false, 00:19:45.451 "c2h_success": false, 00:19:45.451 "sock_priority": 0, 00:19:45.451 "abort_timeout_sec": 1, 00:19:45.451 "ack_timeout": 0, 00:19:45.451 "data_wr_pool_size": 0 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_create_subsystem", 00:19:45.451 "params": { 00:19:45.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.451 "allow_any_host": false, 00:19:45.451 "serial_number": "SPDK00000000000001", 00:19:45.451 "model_number": "SPDK bdev Controller", 00:19:45.451 "max_namespaces": 10, 00:19:45.451 "min_cntlid": 1, 00:19:45.451 "max_cntlid": 65519, 00:19:45.451 "ana_reporting": false 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_subsystem_add_host", 00:19:45.451 "params": { 00:19:45.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.451 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.451 "psk": "/tmp/tmp.ZEzMqqkb86" 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_subsystem_add_ns", 00:19:45.451 "params": { 00:19:45.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.451 "namespace": { 00:19:45.451 "nsid": 1, 00:19:45.451 
"bdev_name": "malloc0", 00:19:45.451 "nguid": "B7FEF0823F05481EA3D51AD4F7F4E5F2", 00:19:45.451 "uuid": "b7fef082-3f05-481e-a3d5-1ad4f7f4e5f2", 00:19:45.451 "no_auto_visible": false 00:19:45.451 } 00:19:45.451 } 00:19:45.451 }, 00:19:45.451 { 00:19:45.451 "method": "nvmf_subsystem_add_listener", 00:19:45.451 "params": { 00:19:45.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.451 "listen_address": { 00:19:45.451 "trtype": "TCP", 00:19:45.451 "adrfam": "IPv4", 00:19:45.451 "traddr": "10.0.0.2", 00:19:45.451 "trsvcid": "4420" 00:19:45.451 }, 00:19:45.451 "secure_channel": true 00:19:45.451 } 00:19:45.451 } 00:19:45.451 ] 00:19:45.451 } 00:19:45.451 ] 00:19:45.451 }' 00:19:45.451 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:45.708 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:45.708 "subsystems": [ 00:19:45.708 { 00:19:45.708 "subsystem": "keyring", 00:19:45.708 "config": [] 00:19:45.708 }, 00:19:45.708 { 00:19:45.708 "subsystem": "iobuf", 00:19:45.708 "config": [ 00:19:45.709 { 00:19:45.709 "method": "iobuf_set_options", 00:19:45.709 "params": { 00:19:45.709 "small_pool_count": 8192, 00:19:45.709 "large_pool_count": 1024, 00:19:45.709 "small_bufsize": 8192, 00:19:45.709 "large_bufsize": 135168 00:19:45.709 } 00:19:45.709 } 00:19:45.709 ] 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "subsystem": "sock", 00:19:45.709 "config": [ 00:19:45.709 { 00:19:45.709 "method": "sock_set_default_impl", 00:19:45.709 "params": { 00:19:45.709 "impl_name": "posix" 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "sock_impl_set_options", 00:19:45.709 "params": { 00:19:45.709 "impl_name": "ssl", 00:19:45.709 "recv_buf_size": 4096, 00:19:45.709 "send_buf_size": 4096, 00:19:45.709 "enable_recv_pipe": true, 00:19:45.709 "enable_quickack": false, 00:19:45.709 "enable_placement_id": 0, 00:19:45.709 "enable_zerocopy_send_server": true, 00:19:45.709 "enable_zerocopy_send_client": false, 00:19:45.709 "zerocopy_threshold": 0, 00:19:45.709 "tls_version": 0, 00:19:45.709 "enable_ktls": false 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "sock_impl_set_options", 00:19:45.709 "params": { 00:19:45.709 "impl_name": "posix", 00:19:45.709 "recv_buf_size": 2097152, 00:19:45.709 "send_buf_size": 2097152, 00:19:45.709 "enable_recv_pipe": true, 00:19:45.709 "enable_quickack": false, 00:19:45.709 "enable_placement_id": 0, 00:19:45.709 "enable_zerocopy_send_server": true, 00:19:45.709 "enable_zerocopy_send_client": false, 00:19:45.709 "zerocopy_threshold": 0, 00:19:45.709 "tls_version": 0, 00:19:45.709 "enable_ktls": false 00:19:45.709 } 00:19:45.709 } 00:19:45.709 ] 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "subsystem": "vmd", 00:19:45.709 "config": [] 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "subsystem": "accel", 00:19:45.709 "config": [ 00:19:45.709 { 00:19:45.709 "method": "accel_set_options", 00:19:45.709 "params": { 00:19:45.709 "small_cache_size": 128, 00:19:45.709 "large_cache_size": 16, 00:19:45.709 "task_count": 2048, 00:19:45.709 "sequence_count": 2048, 00:19:45.709 "buf_count": 2048 00:19:45.709 } 00:19:45.709 } 00:19:45.709 ] 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "subsystem": "bdev", 00:19:45.709 "config": [ 00:19:45.709 { 00:19:45.709 "method": "bdev_set_options", 00:19:45.709 "params": { 00:19:45.709 "bdev_io_pool_size": 65535, 00:19:45.709 "bdev_io_cache_size": 256, 00:19:45.709 
"bdev_auto_examine": true, 00:19:45.709 "iobuf_small_cache_size": 128, 00:19:45.709 "iobuf_large_cache_size": 16 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "bdev_raid_set_options", 00:19:45.709 "params": { 00:19:45.709 "process_window_size_kb": 1024, 00:19:45.709 "process_max_bandwidth_mb_sec": 0 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "bdev_iscsi_set_options", 00:19:45.709 "params": { 00:19:45.709 "timeout_sec": 30 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "bdev_nvme_set_options", 00:19:45.709 "params": { 00:19:45.709 "action_on_timeout": "none", 00:19:45.709 "timeout_us": 0, 00:19:45.709 "timeout_admin_us": 0, 00:19:45.709 "keep_alive_timeout_ms": 10000, 00:19:45.709 "arbitration_burst": 0, 00:19:45.709 "low_priority_weight": 0, 00:19:45.709 "medium_priority_weight": 0, 00:19:45.709 "high_priority_weight": 0, 00:19:45.709 "nvme_adminq_poll_period_us": 10000, 00:19:45.709 "nvme_ioq_poll_period_us": 0, 00:19:45.709 "io_queue_requests": 512, 00:19:45.709 "delay_cmd_submit": true, 00:19:45.709 "transport_retry_count": 4, 00:19:45.709 "bdev_retry_count": 3, 00:19:45.709 "transport_ack_timeout": 0, 00:19:45.709 "ctrlr_loss_timeout_sec": 0, 00:19:45.709 "reconnect_delay_sec": 0, 00:19:45.709 "fast_io_fail_timeout_sec": 0, 00:19:45.709 "disable_auto_failback": false, 00:19:45.709 "generate_uuids": false, 00:19:45.709 "transport_tos": 0, 00:19:45.709 "nvme_error_stat": false, 00:19:45.709 "rdma_srq_size": 0, 00:19:45.709 "io_path_stat": false, 00:19:45.709 "allow_accel_sequence": false, 00:19:45.709 "rdma_max_cq_size": 0, 00:19:45.709 "rdma_cm_event_timeout_ms": 0, 00:19:45.709 "dhchap_digests": [ 00:19:45.709 "sha256", 00:19:45.709 "sha384", 00:19:45.709 "sha512" 00:19:45.709 ], 00:19:45.709 "dhchap_dhgroups": [ 00:19:45.709 "null", 00:19:45.709 "ffdhe2048", 00:19:45.709 "ffdhe3072", 00:19:45.709 "ffdhe4096", 00:19:45.709 "ffdhe6144", 00:19:45.709 "ffdhe8192" 00:19:45.709 ] 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "bdev_nvme_attach_controller", 00:19:45.709 "params": { 00:19:45.709 "name": "TLSTEST", 00:19:45.709 "trtype": "TCP", 00:19:45.709 "adrfam": "IPv4", 00:19:45.709 "traddr": "10.0.0.2", 00:19:45.709 "trsvcid": "4420", 00:19:45.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.709 "prchk_reftag": false, 00:19:45.709 "prchk_guard": false, 00:19:45.709 "ctrlr_loss_timeout_sec": 0, 00:19:45.709 "reconnect_delay_sec": 0, 00:19:45.709 "fast_io_fail_timeout_sec": 0, 00:19:45.709 "psk": "/tmp/tmp.ZEzMqqkb86", 00:19:45.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.709 "hdgst": false, 00:19:45.709 "ddgst": false 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "bdev_nvme_set_hotplug", 00:19:45.709 "params": { 00:19:45.709 "period_us": 100000, 00:19:45.709 "enable": false 00:19:45.709 } 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "method": "bdev_wait_for_examine" 00:19:45.709 } 00:19:45.709 ] 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "subsystem": "nbd", 00:19:45.709 "config": [] 00:19:45.709 } 00:19:45.709 ] 00:19:45.709 }' 00:19:45.709 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 546994 00:19:45.709 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 546994 ']' 00:19:45.709 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 546994 00:19:45.709 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.709 
09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.709 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 546994 00:19:45.967 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:45.967 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:45.967 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 546994' 00:19:45.967 killing process with pid 546994 00:19:45.967 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 546994 00:19:45.967 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.967 00:19:45.967 Latency(us) 00:19:45.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.967 =================================================================================================================== 00:19:45.967 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:45.967 [2024-07-25 09:34:18.466476] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:45.967 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 546994 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 546714 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 546714 ']' 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 546714 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 546714 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 546714' 00:19:46.225 killing process with pid 546714 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 546714 00:19:46.225 [2024-07-25 09:34:18.755687] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:46.225 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 546714 00:19:46.483 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:46.483 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.483 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.483 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:46.483 "subsystems": [ 00:19:46.483 { 00:19:46.484 "subsystem": "keyring", 00:19:46.484 "config": [] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "iobuf", 
00:19:46.484 "config": [ 00:19:46.484 { 00:19:46.484 "method": "iobuf_set_options", 00:19:46.484 "params": { 00:19:46.484 "small_pool_count": 8192, 00:19:46.484 "large_pool_count": 1024, 00:19:46.484 "small_bufsize": 8192, 00:19:46.484 "large_bufsize": 135168 00:19:46.484 } 00:19:46.484 } 00:19:46.484 ] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "sock", 00:19:46.484 "config": [ 00:19:46.484 { 00:19:46.484 "method": "sock_set_default_impl", 00:19:46.484 "params": { 00:19:46.484 "impl_name": "posix" 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "sock_impl_set_options", 00:19:46.484 "params": { 00:19:46.484 "impl_name": "ssl", 00:19:46.484 "recv_buf_size": 4096, 00:19:46.484 "send_buf_size": 4096, 00:19:46.484 "enable_recv_pipe": true, 00:19:46.484 "enable_quickack": false, 00:19:46.484 "enable_placement_id": 0, 00:19:46.484 "enable_zerocopy_send_server": true, 00:19:46.484 "enable_zerocopy_send_client": false, 00:19:46.484 "zerocopy_threshold": 0, 00:19:46.484 "tls_version": 0, 00:19:46.484 "enable_ktls": false 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "sock_impl_set_options", 00:19:46.484 "params": { 00:19:46.484 "impl_name": "posix", 00:19:46.484 "recv_buf_size": 2097152, 00:19:46.484 "send_buf_size": 2097152, 00:19:46.484 "enable_recv_pipe": true, 00:19:46.484 "enable_quickack": false, 00:19:46.484 "enable_placement_id": 0, 00:19:46.484 "enable_zerocopy_send_server": true, 00:19:46.484 "enable_zerocopy_send_client": false, 00:19:46.484 "zerocopy_threshold": 0, 00:19:46.484 "tls_version": 0, 00:19:46.484 "enable_ktls": false 00:19:46.484 } 00:19:46.484 } 00:19:46.484 ] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "vmd", 00:19:46.484 "config": [] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "accel", 00:19:46.484 "config": [ 00:19:46.484 { 00:19:46.484 "method": "accel_set_options", 00:19:46.484 "params": { 00:19:46.484 "small_cache_size": 128, 00:19:46.484 "large_cache_size": 16, 00:19:46.484 "task_count": 2048, 00:19:46.484 "sequence_count": 2048, 00:19:46.484 "buf_count": 2048 00:19:46.484 } 00:19:46.484 } 00:19:46.484 ] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "bdev", 00:19:46.484 "config": [ 00:19:46.484 { 00:19:46.484 "method": "bdev_set_options", 00:19:46.484 "params": { 00:19:46.484 "bdev_io_pool_size": 65535, 00:19:46.484 "bdev_io_cache_size": 256, 00:19:46.484 "bdev_auto_examine": true, 00:19:46.484 "iobuf_small_cache_size": 128, 00:19:46.484 "iobuf_large_cache_size": 16 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "bdev_raid_set_options", 00:19:46.484 "params": { 00:19:46.484 "process_window_size_kb": 1024, 00:19:46.484 "process_max_bandwidth_mb_sec": 0 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "bdev_iscsi_set_options", 00:19:46.484 "params": { 00:19:46.484 "timeout_sec": 30 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "bdev_nvme_set_options", 00:19:46.484 "params": { 00:19:46.484 "action_on_timeout": "none", 00:19:46.484 "timeout_us": 0, 00:19:46.484 "timeout_admin_us": 0, 00:19:46.484 "keep_alive_timeout_ms": 10000, 00:19:46.484 "arbitration_burst": 0, 00:19:46.484 "low_priority_weight": 0, 00:19:46.484 "medium_priority_weight": 0, 00:19:46.484 "high_priority_weight": 0, 00:19:46.484 "nvme_adminq_poll_period_us": 10000, 00:19:46.484 "nvme_ioq_poll_period_us": 0, 00:19:46.484 "io_queue_requests": 0, 00:19:46.484 "delay_cmd_submit": true, 00:19:46.484 "transport_retry_count": 4, 00:19:46.484 
"bdev_retry_count": 3, 00:19:46.484 "transport_ack_timeout": 0, 00:19:46.484 "ctrlr_loss_timeout_sec": 0, 00:19:46.484 "reconnect_delay_sec": 0, 00:19:46.484 "fast_io_fail_timeout_sec": 0, 00:19:46.484 "disable_auto_failback": false, 00:19:46.484 "generate_uuids": false, 00:19:46.484 "transport_tos": 0, 00:19:46.484 "nvme_error_stat": false, 00:19:46.484 "rdma_srq_size": 0, 00:19:46.484 "io_path_stat": false, 00:19:46.484 "allow_accel_sequence": false, 00:19:46.484 "rdma_max_cq_size": 0, 00:19:46.484 "rdma_cm_event_timeout_ms": 0, 00:19:46.484 "dhchap_digests": [ 00:19:46.484 "sha256", 00:19:46.484 "sha384", 00:19:46.484 "sha512" 00:19:46.484 ], 00:19:46.484 "dhchap_dhgroups": [ 00:19:46.484 "null", 00:19:46.484 "ffdhe2048", 00:19:46.484 "ffdhe3072", 00:19:46.484 "ffdhe4096", 00:19:46.484 "ffdhe6144", 00:19:46.484 "ffdhe8192" 00:19:46.484 ] 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "bdev_nvme_set_hotplug", 00:19:46.484 "params": { 00:19:46.484 "period_us": 100000, 00:19:46.484 "enable": false 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "bdev_malloc_create", 00:19:46.484 "params": { 00:19:46.484 "name": "malloc0", 00:19:46.484 "num_blocks": 8192, 00:19:46.484 "block_size": 4096, 00:19:46.484 "physical_block_size": 4096, 00:19:46.484 "uuid": "b7fef082-3f05-481e-a3d5-1ad4f7f4e5f2", 00:19:46.484 "optimal_io_boundary": 0, 00:19:46.484 "md_size": 0, 00:19:46.484 "dif_type": 0, 00:19:46.484 "dif_is_head_of_md": false, 00:19:46.484 "dif_pi_format": 0 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "bdev_wait_for_examine" 00:19:46.484 } 00:19:46.484 ] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "nbd", 00:19:46.484 "config": [] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "scheduler", 00:19:46.484 "config": [ 00:19:46.484 { 00:19:46.484 "method": "framework_set_scheduler", 00:19:46.484 "params": { 00:19:46.484 "name": "static" 00:19:46.484 } 00:19:46.484 } 00:19:46.484 ] 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "subsystem": "nvmf", 00:19:46.484 "config": [ 00:19:46.484 { 00:19:46.484 "method": "nvmf_set_config", 00:19:46.484 "params": { 00:19:46.484 "discovery_filter": "match_any", 00:19:46.484 "admin_cmd_passthru": { 00:19:46.484 "identify_ctrlr": false 00:19:46.484 } 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "nvmf_set_max_subsystems", 00:19:46.484 "params": { 00:19:46.484 "max_subsystems": 1024 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "nvmf_set_crdt", 00:19:46.484 "params": { 00:19:46.484 "crdt1": 0, 00:19:46.484 "crdt2": 0, 00:19:46.484 "crdt3": 0 00:19:46.484 } 00:19:46.484 }, 00:19:46.484 { 00:19:46.484 "method": "nvmf_create_transport", 00:19:46.484 "params": { 00:19:46.484 "trtype": "TCP", 00:19:46.484 "max_queue_depth": 128, 00:19:46.484 "max_io_qpairs_per_ctrlr": 127, 00:19:46.484 "in_capsule_data_size": 4096, 00:19:46.484 "max_io_size": 131072, 00:19:46.484 "io_unit_size": 131072, 00:19:46.484 "max_aq_depth": 128, 00:19:46.484 "num_shared_buffers": 511, 00:19:46.484 "buf_cache_size": 4294967295, 00:19:46.484 "dif_insert_or_strip": false, 00:19:46.484 "zcopy": false, 00:19:46.484 "c2h_success": false, 00:19:46.484 "sock_priority": 0, 00:19:46.484 "abort_timeout_sec": 1, 00:19:46.484 "ack_timeout": 0, 00:19:46.484 "data_wr_pool_size": 0 00:19:46.484 } 00:19:46.484 }, 00:19:46.485 { 00:19:46.485 "method": "nvmf_create_subsystem", 00:19:46.485 "params": { 00:19:46.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.485 "allow_any_host": 
false, 00:19:46.485 "serial_number": "SPDK00000000000001", 00:19:46.485 "model_number": "SPDK bdev Controller", 00:19:46.485 "max_namespaces": 10, 00:19:46.485 "min_cntlid": 1, 00:19:46.485 "max_cntlid": 65519, 00:19:46.485 "ana_reporting": false 00:19:46.485 } 00:19:46.485 }, 00:19:46.485 { 00:19:46.485 "method": "nvmf_subsystem_add_host", 00:19:46.485 "params": { 00:19:46.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.485 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.485 "psk": "/tmp/tmp.ZEzMqqkb86" 00:19:46.485 } 00:19:46.485 }, 00:19:46.485 { 00:19:46.485 "method": "nvmf_subsystem_add_ns", 00:19:46.485 "params": { 00:19:46.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.485 "namespace": { 00:19:46.485 "nsid": 1, 00:19:46.485 "bdev_name": "malloc0", 00:19:46.485 "nguid": "B7FEF0823F05481EA3D51AD4F7F4E5F2", 00:19:46.485 "uuid": "b7fef082-3f05-481e-a3d5-1ad4f7f4e5f2", 00:19:46.485 "no_auto_visible": false 00:19:46.485 } 00:19:46.485 } 00:19:46.485 }, 00:19:46.485 { 00:19:46.485 "method": "nvmf_subsystem_add_listener", 00:19:46.485 "params": { 00:19:46.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.485 "listen_address": { 00:19:46.485 "trtype": "TCP", 00:19:46.485 "adrfam": "IPv4", 00:19:46.485 "traddr": "10.0.0.2", 00:19:46.485 "trsvcid": "4420" 00:19:46.485 }, 00:19:46.485 "secure_channel": true 00:19:46.485 } 00:19:46.485 } 00:19:46.485 ] 00:19:46.485 } 00:19:46.485 ] 00:19:46.485 }' 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=547257 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 547257 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 547257 ']' 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.485 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.485 [2024-07-25 09:34:19.098433] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:46.485 [2024-07-25 09:34:19.098527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.485 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.485 [2024-07-25 09:34:19.166325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.743 [2024-07-25 09:34:19.279829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
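Editor's note: the long JSON blob above is not hand-written — target/tls.sh:196 captures it from the running target with save_config, and tls.sh:203 replays it by starting a fresh nvmf_tgt with -c pointed at a descriptor fed from that string (hence the -c /dev/fd/62 in the command line above). A rough equivalent, assuming the SPDK source tree as working directory, no network namespace (the CI wraps the target in ip netns exec cvl_0_0_ns_spdk), and an illustrative on-disk path in place of the file descriptor:

    # Snapshot the live target's configuration over its default RPC socket
    scripts/rpc.py save_config > /tmp/tgt_config.json     # path is illustrative

    # Start a new target that restores the same transport, subsystem, TLS listener
    # and PSK-bound host; the suite's waitforlisten helper then polls the RPC socket
    build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json &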
00:19:46.743 [2024-07-25 09:34:19.279890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.743 [2024-07-25 09:34:19.279907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.743 [2024-07-25 09:34:19.279921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.743 [2024-07-25 09:34:19.279932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.743 [2024-07-25 09:34:19.280024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.002 [2024-07-25 09:34:19.518122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.002 [2024-07-25 09:34:19.553774] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:47.002 [2024-07-25 09:34:19.569826] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.002 [2024-07-25 09:34:19.570061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=547326 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 547326 /var/tmp/bdevperf.sock 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 547326 ']' 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.568 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:47.568 "subsystems": [ 00:19:47.568 { 00:19:47.568 "subsystem": "keyring", 00:19:47.568 "config": [] 00:19:47.568 }, 00:19:47.568 { 00:19:47.568 "subsystem": "iobuf", 00:19:47.568 "config": [ 00:19:47.568 { 00:19:47.568 "method": "iobuf_set_options", 00:19:47.569 "params": { 00:19:47.569 "small_pool_count": 8192, 00:19:47.569 "large_pool_count": 1024, 00:19:47.569 "small_bufsize": 8192, 00:19:47.569 "large_bufsize": 135168 00:19:47.569 } 00:19:47.569 } 00:19:47.569 ] 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "subsystem": "sock", 00:19:47.569 "config": [ 00:19:47.569 { 00:19:47.569 "method": "sock_set_default_impl", 00:19:47.569 "params": { 00:19:47.569 "impl_name": "posix" 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 
{ 00:19:47.569 "method": "sock_impl_set_options", 00:19:47.569 "params": { 00:19:47.569 "impl_name": "ssl", 00:19:47.569 "recv_buf_size": 4096, 00:19:47.569 "send_buf_size": 4096, 00:19:47.569 "enable_recv_pipe": true, 00:19:47.569 "enable_quickack": false, 00:19:47.569 "enable_placement_id": 0, 00:19:47.569 "enable_zerocopy_send_server": true, 00:19:47.569 "enable_zerocopy_send_client": false, 00:19:47.569 "zerocopy_threshold": 0, 00:19:47.569 "tls_version": 0, 00:19:47.569 "enable_ktls": false 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "sock_impl_set_options", 00:19:47.569 "params": { 00:19:47.569 "impl_name": "posix", 00:19:47.569 "recv_buf_size": 2097152, 00:19:47.569 "send_buf_size": 2097152, 00:19:47.569 "enable_recv_pipe": true, 00:19:47.569 "enable_quickack": false, 00:19:47.569 "enable_placement_id": 0, 00:19:47.569 "enable_zerocopy_send_server": true, 00:19:47.569 "enable_zerocopy_send_client": false, 00:19:47.569 "zerocopy_threshold": 0, 00:19:47.569 "tls_version": 0, 00:19:47.569 "enable_ktls": false 00:19:47.569 } 00:19:47.569 } 00:19:47.569 ] 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "subsystem": "vmd", 00:19:47.569 "config": [] 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "subsystem": "accel", 00:19:47.569 "config": [ 00:19:47.569 { 00:19:47.569 "method": "accel_set_options", 00:19:47.569 "params": { 00:19:47.569 "small_cache_size": 128, 00:19:47.569 "large_cache_size": 16, 00:19:47.569 "task_count": 2048, 00:19:47.569 "sequence_count": 2048, 00:19:47.569 "buf_count": 2048 00:19:47.569 } 00:19:47.569 } 00:19:47.569 ] 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "subsystem": "bdev", 00:19:47.569 "config": [ 00:19:47.569 { 00:19:47.569 "method": "bdev_set_options", 00:19:47.569 "params": { 00:19:47.569 "bdev_io_pool_size": 65535, 00:19:47.569 "bdev_io_cache_size": 256, 00:19:47.569 "bdev_auto_examine": true, 00:19:47.569 "iobuf_small_cache_size": 128, 00:19:47.569 "iobuf_large_cache_size": 16 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "bdev_raid_set_options", 00:19:47.569 "params": { 00:19:47.569 "process_window_size_kb": 1024, 00:19:47.569 "process_max_bandwidth_mb_sec": 0 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "bdev_iscsi_set_options", 00:19:47.569 "params": { 00:19:47.569 "timeout_sec": 30 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "bdev_nvme_set_options", 00:19:47.569 "params": { 00:19:47.569 "action_on_timeout": "none", 00:19:47.569 "timeout_us": 0, 00:19:47.569 "timeout_admin_us": 0, 00:19:47.569 "keep_alive_timeout_ms": 10000, 00:19:47.569 "arbitration_burst": 0, 00:19:47.569 "low_priority_weight": 0, 00:19:47.569 "medium_priority_weight": 0, 00:19:47.569 "high_priority_weight": 0, 00:19:47.569 "nvme_adminq_poll_period_us": 10000, 00:19:47.569 "nvme_ioq_poll_period_us": 0, 00:19:47.569 "io_queue_requests": 512, 00:19:47.569 "delay_cmd_submit": true, 00:19:47.569 "transport_retry_count": 4, 00:19:47.569 "bdev_retry_count": 3, 00:19:47.569 "transport_ack_timeout": 0, 00:19:47.569 "ctrlr_loss_timeout_sec": 0, 00:19:47.569 "reconnect_delay_sec": 0, 00:19:47.569 "fast_io_fail_timeout_sec": 0, 00:19:47.569 "disable_auto_failback": false, 00:19:47.569 "generate_uuids": false, 00:19:47.569 "transport_tos": 0, 00:19:47.569 "nvme_error_stat": false, 00:19:47.569 "rdma_srq_size": 0, 00:19:47.569 "io_path_stat": false, 00:19:47.569 "allow_accel_sequence": false, 00:19:47.569 "rdma_max_cq_size": 0, 00:19:47.569 "rdma_cm_event_timeout_ms": 0, 00:19:47.569 "dhchap_digests": [ 
00:19:47.569 "sha256", 00:19:47.569 "sha384", 00:19:47.569 "sha512" 00:19:47.569 ], 00:19:47.569 "dhchap_dhgroups": [ 00:19:47.569 "null", 00:19:47.569 "ffdhe2048", 00:19:47.569 "ffdhe3072", 00:19:47.569 "ffdhe4096", 00:19:47.569 "ffdhe6144", 00:19:47.569 "ffdhe8192" 00:19:47.569 ] 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "bdev_nvme_attach_controller", 00:19:47.569 "params": { 00:19:47.569 "name": "TLSTEST", 00:19:47.569 "trtype": "TCP", 00:19:47.569 "adrfam": "IPv4", 00:19:47.569 "traddr": "10.0.0.2", 00:19:47.569 "trsvcid": "4420", 00:19:47.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.569 "prchk_reftag": false, 00:19:47.569 "prchk_guard": false, 00:19:47.569 "ctrlr_loss_timeout_sec": 0, 00:19:47.569 "reconnect_delay_sec": 0, 00:19:47.569 "fast_io_fail_timeout_sec": 0, 00:19:47.569 "psk": "/tmp/tmp.ZEzMqqkb86", 00:19:47.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.569 "hdgst": false, 00:19:47.569 "ddgst": false 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "bdev_nvme_set_hotplug", 00:19:47.569 "params": { 00:19:47.569 "period_us": 100000, 00:19:47.569 "enable": false 00:19:47.569 } 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "method": "bdev_wait_for_examine" 00:19:47.569 } 00:19:47.569 ] 00:19:47.569 }, 00:19:47.569 { 00:19:47.569 "subsystem": "nbd", 00:19:47.569 "config": [] 00:19:47.569 } 00:19:47.569 ] 00:19:47.569 }' 00:19:47.569 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.569 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.569 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.569 [2024-07-25 09:34:20.152606] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:47.569 [2024-07-25 09:34:20.152714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547326 ] 00:19:47.569 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.569 [2024-07-25 09:34:20.213799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.828 [2024-07-25 09:34:20.321063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.828 [2024-07-25 09:34:20.491991] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.828 [2024-07-25 09:34:20.492135] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:48.393 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.393 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:48.393 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.650 Running I/O for 10 seconds... 
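Editor's note: for the I/O phase the initiator is bdevperf started with -z, which parks the application until its RPC socket is driven externally; the bdev_nvme_attach_controller call carrying the PSK arrives through the -c /dev/fd/63 config shown above, and bdevperf.py then triggers the run whose results follow. A compressed sketch of that flow, with an illustrative on-disk config path standing in for the file descriptor:

    # Start bdevperf in wait mode on its own RPC socket (core mask 0x4, 4 KiB verify I/O)
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf_tls.json &   # config path is illustrative

    # Kick off the 10-second verify workload once the TLS-backed bdev is attached
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests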
00:19:58.614 00:19:58.614 Latency(us) 00:19:58.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.614 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.614 Verification LBA range: start 0x0 length 0x2000 00:19:58.614 TLSTESTn1 : 10.02 3356.12 13.11 0.00 0.00 38082.91 7233.23 47380.10 00:19:58.614 =================================================================================================================== 00:19:58.614 Total : 3356.12 13.11 0.00 0.00 38082.91 7233.23 47380.10 00:19:58.614 0 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 547326 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 547326 ']' 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 547326 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 547326 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 547326' 00:19:58.614 killing process with pid 547326 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 547326 00:19:58.614 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.614 00:19:58.614 Latency(us) 00:19:58.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.614 =================================================================================================================== 00:19:58.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.614 [2024-07-25 09:34:31.263629] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:58.614 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 547326 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 547257 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 547257 ']' 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 547257 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 547257 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 547257' 00:19:58.872 killing process with pid 547257 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 547257 00:19:58.872 [2024-07-25 09:34:31.549081] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:58.872 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 547257 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=548749 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 548749 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 548749 ']' 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.130 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.130 [2024-07-25 09:34:31.863917] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:59.130 [2024-07-25 09:34:31.863995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.388 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.388 [2024-07-25 09:34:31.932512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.388 [2024-07-25 09:34:32.046666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.388 [2024-07-25 09:34:32.046730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.388 [2024-07-25 09:34:32.046747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.388 [2024-07-25 09:34:32.046767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.388 [2024-07-25 09:34:32.046780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
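Editor's note: the setup_nvmf_tgt helper invoked next (target/tls.sh:219) repeats the same six-step RPC sequence used throughout this file: TCP transport, a subsystem capped at 10 namespaces, a TLS-enabled listener, a malloc backing bdev, the namespace, and the host entry bound to the PSK. Collected in one place as a sketch, assuming the default /var/tmp/spdk.sock RPC socket, the SPDK tree as working directory, and the key file from this run:

    KEY=/tmp/tmp.ZEzMqqkb86                                    # PSK file created earlier in the run
    scripts/rpc.py nvmf_create_transport -t tcp -o             # TCP transport (flags exactly as tls.sh passes them)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                          # -k marks the listener as a secure (TLS) channel
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0       # 32 MiB malloc bdev, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"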
00:19:59.388 [2024-07-25 09:34:32.046812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ZEzMqqkb86 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZEzMqqkb86 00:19:59.646 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.904 [2024-07-25 09:34:32.423699] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.904 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.161 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.419 [2024-07-25 09:34:32.929062] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.419 [2024-07-25 09:34:32.929280] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.419 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.677 malloc0 00:20:00.677 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:00.934 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZEzMqqkb86 00:20:01.192 [2024-07-25 09:34:33.735091] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=549033 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 549033 /var/tmp/bdevperf.sock 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' 
-z 549033 ']' 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.192 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.192 [2024-07-25 09:34:33.799931] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:01.192 [2024-07-25 09:34:33.800005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549033 ] 00:20:01.192 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.192 [2024-07-25 09:34:33.860617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.450 [2024-07-25 09:34:33.977068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.450 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.450 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:01.450 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZEzMqqkb86 00:20:01.708 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:01.966 [2024-07-25 09:34:34.650195] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.224 nvme0n1 00:20:02.224 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.224 Running I/O for 1 seconds... 
00:20:03.596 00:20:03.596 Latency(us) 00:20:03.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.596 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.596 Verification LBA range: start 0x0 length 0x2000 00:20:03.596 nvme0n1 : 1.03 3277.33 12.80 0.00 0.00 38591.06 6359.42 38059.43 00:20:03.596 =================================================================================================================== 00:20:03.596 Total : 3277.33 12.80 0.00 0.00 38591.06 6359.42 38059.43 00:20:03.596 0 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 549033 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 549033 ']' 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 549033 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549033 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549033' 00:20:03.596 killing process with pid 549033 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 549033 00:20:03.596 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.596 00:20:03.596 Latency(us) 00:20:03.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.596 =================================================================================================================== 00:20:03.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.596 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 549033 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 548749 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 548749 ']' 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 548749 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 548749 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 548749' 00:20:03.596 killing process with pid 548749 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 548749 00:20:03.596 [2024-07-25 09:34:36.246538] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:03.596 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 548749 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=549318 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 549318 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 549318 ']' 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.854 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.111 [2024-07-25 09:34:36.589213] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:04.111 [2024-07-25 09:34:36.589292] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.111 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.111 [2024-07-25 09:34:36.651285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.111 [2024-07-25 09:34:36.755605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.111 [2024-07-25 09:34:36.755672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.111 [2024-07-25 09:34:36.755685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.111 [2024-07-25 09:34:36.755711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.111 [2024-07-25 09:34:36.755721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.111 [2024-07-25 09:34:36.755755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.368 [2024-07-25 09:34:36.889902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.368 malloc0 00:20:04.368 [2024-07-25 09:34:36.921673] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.368 [2024-07-25 09:34:36.941555] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=549346 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 549346 /var/tmp/bdevperf.sock 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 549346 ']' 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.368 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.368 [2024-07-25 09:34:37.010174] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:20:04.368 [2024-07-25 09:34:37.010260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549346 ] 00:20:04.368 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.368 [2024-07-25 09:34:37.075951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.625 [2024-07-25 09:34:37.194043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.625 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.625 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:04.625 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZEzMqqkb86 00:20:04.881 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:05.137 [2024-07-25 09:34:37.777069] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.137 nvme0n1 00:20:05.137 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.395 Running I/O for 1 seconds... 00:20:06.327 00:20:06.327 Latency(us) 00:20:06.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.327 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:06.327 Verification LBA range: start 0x0 length 0x2000 00:20:06.327 nvme0n1 : 1.02 3008.90 11.75 0.00 0.00 42090.22 8204.14 45438.29 00:20:06.327 =================================================================================================================== 00:20:06.327 Total : 3008.90 11.75 0.00 0.00 42090.22 8204.14 45438.29 00:20:06.327 0 00:20:06.327 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:06.327 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.327 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.584 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.584 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:06.584 "subsystems": [ 00:20:06.584 { 00:20:06.584 "subsystem": "keyring", 00:20:06.584 "config": [ 00:20:06.585 { 00:20:06.585 "method": "keyring_file_add_key", 00:20:06.585 "params": { 00:20:06.585 "name": "key0", 00:20:06.585 "path": "/tmp/tmp.ZEzMqqkb86" 00:20:06.585 } 00:20:06.585 } 00:20:06.585 ] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "iobuf", 00:20:06.585 "config": [ 00:20:06.585 { 00:20:06.585 "method": "iobuf_set_options", 00:20:06.585 "params": { 00:20:06.585 "small_pool_count": 8192, 00:20:06.585 "large_pool_count": 1024, 00:20:06.585 "small_bufsize": 8192, 00:20:06.585 "large_bufsize": 135168 00:20:06.585 } 00:20:06.585 } 00:20:06.585 ] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 
"subsystem": "sock", 00:20:06.585 "config": [ 00:20:06.585 { 00:20:06.585 "method": "sock_set_default_impl", 00:20:06.585 "params": { 00:20:06.585 "impl_name": "posix" 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "sock_impl_set_options", 00:20:06.585 "params": { 00:20:06.585 "impl_name": "ssl", 00:20:06.585 "recv_buf_size": 4096, 00:20:06.585 "send_buf_size": 4096, 00:20:06.585 "enable_recv_pipe": true, 00:20:06.585 "enable_quickack": false, 00:20:06.585 "enable_placement_id": 0, 00:20:06.585 "enable_zerocopy_send_server": true, 00:20:06.585 "enable_zerocopy_send_client": false, 00:20:06.585 "zerocopy_threshold": 0, 00:20:06.585 "tls_version": 0, 00:20:06.585 "enable_ktls": false 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "sock_impl_set_options", 00:20:06.585 "params": { 00:20:06.585 "impl_name": "posix", 00:20:06.585 "recv_buf_size": 2097152, 00:20:06.585 "send_buf_size": 2097152, 00:20:06.585 "enable_recv_pipe": true, 00:20:06.585 "enable_quickack": false, 00:20:06.585 "enable_placement_id": 0, 00:20:06.585 "enable_zerocopy_send_server": true, 00:20:06.585 "enable_zerocopy_send_client": false, 00:20:06.585 "zerocopy_threshold": 0, 00:20:06.585 "tls_version": 0, 00:20:06.585 "enable_ktls": false 00:20:06.585 } 00:20:06.585 } 00:20:06.585 ] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "vmd", 00:20:06.585 "config": [] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "accel", 00:20:06.585 "config": [ 00:20:06.585 { 00:20:06.585 "method": "accel_set_options", 00:20:06.585 "params": { 00:20:06.585 "small_cache_size": 128, 00:20:06.585 "large_cache_size": 16, 00:20:06.585 "task_count": 2048, 00:20:06.585 "sequence_count": 2048, 00:20:06.585 "buf_count": 2048 00:20:06.585 } 00:20:06.585 } 00:20:06.585 ] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "bdev", 00:20:06.585 "config": [ 00:20:06.585 { 00:20:06.585 "method": "bdev_set_options", 00:20:06.585 "params": { 00:20:06.585 "bdev_io_pool_size": 65535, 00:20:06.585 "bdev_io_cache_size": 256, 00:20:06.585 "bdev_auto_examine": true, 00:20:06.585 "iobuf_small_cache_size": 128, 00:20:06.585 "iobuf_large_cache_size": 16 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "bdev_raid_set_options", 00:20:06.585 "params": { 00:20:06.585 "process_window_size_kb": 1024, 00:20:06.585 "process_max_bandwidth_mb_sec": 0 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "bdev_iscsi_set_options", 00:20:06.585 "params": { 00:20:06.585 "timeout_sec": 30 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "bdev_nvme_set_options", 00:20:06.585 "params": { 00:20:06.585 "action_on_timeout": "none", 00:20:06.585 "timeout_us": 0, 00:20:06.585 "timeout_admin_us": 0, 00:20:06.585 "keep_alive_timeout_ms": 10000, 00:20:06.585 "arbitration_burst": 0, 00:20:06.585 "low_priority_weight": 0, 00:20:06.585 "medium_priority_weight": 0, 00:20:06.585 "high_priority_weight": 0, 00:20:06.585 "nvme_adminq_poll_period_us": 10000, 00:20:06.585 "nvme_ioq_poll_period_us": 0, 00:20:06.585 "io_queue_requests": 0, 00:20:06.585 "delay_cmd_submit": true, 00:20:06.585 "transport_retry_count": 4, 00:20:06.585 "bdev_retry_count": 3, 00:20:06.585 "transport_ack_timeout": 0, 00:20:06.585 "ctrlr_loss_timeout_sec": 0, 00:20:06.585 "reconnect_delay_sec": 0, 00:20:06.585 "fast_io_fail_timeout_sec": 0, 00:20:06.585 "disable_auto_failback": false, 00:20:06.585 "generate_uuids": false, 00:20:06.585 "transport_tos": 0, 00:20:06.585 "nvme_error_stat": false, 00:20:06.585 
"rdma_srq_size": 0, 00:20:06.585 "io_path_stat": false, 00:20:06.585 "allow_accel_sequence": false, 00:20:06.585 "rdma_max_cq_size": 0, 00:20:06.585 "rdma_cm_event_timeout_ms": 0, 00:20:06.585 "dhchap_digests": [ 00:20:06.585 "sha256", 00:20:06.585 "sha384", 00:20:06.585 "sha512" 00:20:06.585 ], 00:20:06.585 "dhchap_dhgroups": [ 00:20:06.585 "null", 00:20:06.585 "ffdhe2048", 00:20:06.585 "ffdhe3072", 00:20:06.585 "ffdhe4096", 00:20:06.585 "ffdhe6144", 00:20:06.585 "ffdhe8192" 00:20:06.585 ] 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "bdev_nvme_set_hotplug", 00:20:06.585 "params": { 00:20:06.585 "period_us": 100000, 00:20:06.585 "enable": false 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "bdev_malloc_create", 00:20:06.585 "params": { 00:20:06.585 "name": "malloc0", 00:20:06.585 "num_blocks": 8192, 00:20:06.585 "block_size": 4096, 00:20:06.585 "physical_block_size": 4096, 00:20:06.585 "uuid": "aa14ac45-e21a-446f-b182-79990dd5850d", 00:20:06.585 "optimal_io_boundary": 0, 00:20:06.585 "md_size": 0, 00:20:06.585 "dif_type": 0, 00:20:06.585 "dif_is_head_of_md": false, 00:20:06.585 "dif_pi_format": 0 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "bdev_wait_for_examine" 00:20:06.585 } 00:20:06.585 ] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "nbd", 00:20:06.585 "config": [] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "scheduler", 00:20:06.585 "config": [ 00:20:06.585 { 00:20:06.585 "method": "framework_set_scheduler", 00:20:06.585 "params": { 00:20:06.585 "name": "static" 00:20:06.585 } 00:20:06.585 } 00:20:06.585 ] 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "subsystem": "nvmf", 00:20:06.585 "config": [ 00:20:06.585 { 00:20:06.585 "method": "nvmf_set_config", 00:20:06.585 "params": { 00:20:06.585 "discovery_filter": "match_any", 00:20:06.585 "admin_cmd_passthru": { 00:20:06.585 "identify_ctrlr": false 00:20:06.585 } 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "nvmf_set_max_subsystems", 00:20:06.585 "params": { 00:20:06.585 "max_subsystems": 1024 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "nvmf_set_crdt", 00:20:06.585 "params": { 00:20:06.585 "crdt1": 0, 00:20:06.585 "crdt2": 0, 00:20:06.585 "crdt3": 0 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "nvmf_create_transport", 00:20:06.585 "params": { 00:20:06.585 "trtype": "TCP", 00:20:06.585 "max_queue_depth": 128, 00:20:06.585 "max_io_qpairs_per_ctrlr": 127, 00:20:06.585 "in_capsule_data_size": 4096, 00:20:06.585 "max_io_size": 131072, 00:20:06.585 "io_unit_size": 131072, 00:20:06.585 "max_aq_depth": 128, 00:20:06.585 "num_shared_buffers": 511, 00:20:06.585 "buf_cache_size": 4294967295, 00:20:06.585 "dif_insert_or_strip": false, 00:20:06.585 "zcopy": false, 00:20:06.585 "c2h_success": false, 00:20:06.585 "sock_priority": 0, 00:20:06.585 "abort_timeout_sec": 1, 00:20:06.585 "ack_timeout": 0, 00:20:06.585 "data_wr_pool_size": 0 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "nvmf_create_subsystem", 00:20:06.585 "params": { 00:20:06.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.585 "allow_any_host": false, 00:20:06.585 "serial_number": "00000000000000000000", 00:20:06.585 "model_number": "SPDK bdev Controller", 00:20:06.585 "max_namespaces": 32, 00:20:06.585 "min_cntlid": 1, 00:20:06.585 "max_cntlid": 65519, 00:20:06.585 "ana_reporting": false 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "nvmf_subsystem_add_host", 00:20:06.585 
"params": { 00:20:06.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.585 "host": "nqn.2016-06.io.spdk:host1", 00:20:06.585 "psk": "key0" 00:20:06.585 } 00:20:06.585 }, 00:20:06.585 { 00:20:06.585 "method": "nvmf_subsystem_add_ns", 00:20:06.585 "params": { 00:20:06.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.585 "namespace": { 00:20:06.585 "nsid": 1, 00:20:06.586 "bdev_name": "malloc0", 00:20:06.586 "nguid": "AA14AC45E21A446FB18279990DD5850D", 00:20:06.586 "uuid": "aa14ac45-e21a-446f-b182-79990dd5850d", 00:20:06.586 "no_auto_visible": false 00:20:06.586 } 00:20:06.586 } 00:20:06.586 }, 00:20:06.586 { 00:20:06.586 "method": "nvmf_subsystem_add_listener", 00:20:06.586 "params": { 00:20:06.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.586 "listen_address": { 00:20:06.586 "trtype": "TCP", 00:20:06.586 "adrfam": "IPv4", 00:20:06.586 "traddr": "10.0.0.2", 00:20:06.586 "trsvcid": "4420" 00:20:06.586 }, 00:20:06.586 "secure_channel": false, 00:20:06.586 "sock_impl": "ssl" 00:20:06.586 } 00:20:06.586 } 00:20:06.586 ] 00:20:06.586 } 00:20:06.586 ] 00:20:06.586 }' 00:20:06.586 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:06.844 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:06.844 "subsystems": [ 00:20:06.844 { 00:20:06.844 "subsystem": "keyring", 00:20:06.844 "config": [ 00:20:06.844 { 00:20:06.844 "method": "keyring_file_add_key", 00:20:06.844 "params": { 00:20:06.844 "name": "key0", 00:20:06.844 "path": "/tmp/tmp.ZEzMqqkb86" 00:20:06.844 } 00:20:06.844 } 00:20:06.844 ] 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "subsystem": "iobuf", 00:20:06.844 "config": [ 00:20:06.844 { 00:20:06.844 "method": "iobuf_set_options", 00:20:06.844 "params": { 00:20:06.844 "small_pool_count": 8192, 00:20:06.844 "large_pool_count": 1024, 00:20:06.844 "small_bufsize": 8192, 00:20:06.844 "large_bufsize": 135168 00:20:06.844 } 00:20:06.844 } 00:20:06.844 ] 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "subsystem": "sock", 00:20:06.844 "config": [ 00:20:06.844 { 00:20:06.844 "method": "sock_set_default_impl", 00:20:06.844 "params": { 00:20:06.844 "impl_name": "posix" 00:20:06.844 } 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "method": "sock_impl_set_options", 00:20:06.844 "params": { 00:20:06.844 "impl_name": "ssl", 00:20:06.844 "recv_buf_size": 4096, 00:20:06.844 "send_buf_size": 4096, 00:20:06.844 "enable_recv_pipe": true, 00:20:06.844 "enable_quickack": false, 00:20:06.844 "enable_placement_id": 0, 00:20:06.844 "enable_zerocopy_send_server": true, 00:20:06.844 "enable_zerocopy_send_client": false, 00:20:06.844 "zerocopy_threshold": 0, 00:20:06.844 "tls_version": 0, 00:20:06.844 "enable_ktls": false 00:20:06.844 } 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "method": "sock_impl_set_options", 00:20:06.844 "params": { 00:20:06.844 "impl_name": "posix", 00:20:06.844 "recv_buf_size": 2097152, 00:20:06.844 "send_buf_size": 2097152, 00:20:06.844 "enable_recv_pipe": true, 00:20:06.844 "enable_quickack": false, 00:20:06.844 "enable_placement_id": 0, 00:20:06.844 "enable_zerocopy_send_server": true, 00:20:06.844 "enable_zerocopy_send_client": false, 00:20:06.844 "zerocopy_threshold": 0, 00:20:06.844 "tls_version": 0, 00:20:06.844 "enable_ktls": false 00:20:06.844 } 00:20:06.844 } 00:20:06.844 ] 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "subsystem": "vmd", 00:20:06.844 "config": [] 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "subsystem": 
"accel", 00:20:06.844 "config": [ 00:20:06.844 { 00:20:06.844 "method": "accel_set_options", 00:20:06.844 "params": { 00:20:06.844 "small_cache_size": 128, 00:20:06.844 "large_cache_size": 16, 00:20:06.844 "task_count": 2048, 00:20:06.844 "sequence_count": 2048, 00:20:06.844 "buf_count": 2048 00:20:06.844 } 00:20:06.844 } 00:20:06.844 ] 00:20:06.844 }, 00:20:06.844 { 00:20:06.844 "subsystem": "bdev", 00:20:06.844 "config": [ 00:20:06.844 { 00:20:06.845 "method": "bdev_set_options", 00:20:06.845 "params": { 00:20:06.845 "bdev_io_pool_size": 65535, 00:20:06.845 "bdev_io_cache_size": 256, 00:20:06.845 "bdev_auto_examine": true, 00:20:06.845 "iobuf_small_cache_size": 128, 00:20:06.845 "iobuf_large_cache_size": 16 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_raid_set_options", 00:20:06.845 "params": { 00:20:06.845 "process_window_size_kb": 1024, 00:20:06.845 "process_max_bandwidth_mb_sec": 0 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_iscsi_set_options", 00:20:06.845 "params": { 00:20:06.845 "timeout_sec": 30 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_nvme_set_options", 00:20:06.845 "params": { 00:20:06.845 "action_on_timeout": "none", 00:20:06.845 "timeout_us": 0, 00:20:06.845 "timeout_admin_us": 0, 00:20:06.845 "keep_alive_timeout_ms": 10000, 00:20:06.845 "arbitration_burst": 0, 00:20:06.845 "low_priority_weight": 0, 00:20:06.845 "medium_priority_weight": 0, 00:20:06.845 "high_priority_weight": 0, 00:20:06.845 "nvme_adminq_poll_period_us": 10000, 00:20:06.845 "nvme_ioq_poll_period_us": 0, 00:20:06.845 "io_queue_requests": 512, 00:20:06.845 "delay_cmd_submit": true, 00:20:06.845 "transport_retry_count": 4, 00:20:06.845 "bdev_retry_count": 3, 00:20:06.845 "transport_ack_timeout": 0, 00:20:06.845 "ctrlr_loss_timeout_sec": 0, 00:20:06.845 "reconnect_delay_sec": 0, 00:20:06.845 "fast_io_fail_timeout_sec": 0, 00:20:06.845 "disable_auto_failback": false, 00:20:06.845 "generate_uuids": false, 00:20:06.845 "transport_tos": 0, 00:20:06.845 "nvme_error_stat": false, 00:20:06.845 "rdma_srq_size": 0, 00:20:06.845 "io_path_stat": false, 00:20:06.845 "allow_accel_sequence": false, 00:20:06.845 "rdma_max_cq_size": 0, 00:20:06.845 "rdma_cm_event_timeout_ms": 0, 00:20:06.845 "dhchap_digests": [ 00:20:06.845 "sha256", 00:20:06.845 "sha384", 00:20:06.845 "sha512" 00:20:06.845 ], 00:20:06.845 "dhchap_dhgroups": [ 00:20:06.845 "null", 00:20:06.845 "ffdhe2048", 00:20:06.845 "ffdhe3072", 00:20:06.845 "ffdhe4096", 00:20:06.845 "ffdhe6144", 00:20:06.845 "ffdhe8192" 00:20:06.845 ] 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_nvme_attach_controller", 00:20:06.845 "params": { 00:20:06.845 "name": "nvme0", 00:20:06.845 "trtype": "TCP", 00:20:06.845 "adrfam": "IPv4", 00:20:06.845 "traddr": "10.0.0.2", 00:20:06.845 "trsvcid": "4420", 00:20:06.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.845 "prchk_reftag": false, 00:20:06.845 "prchk_guard": false, 00:20:06.845 "ctrlr_loss_timeout_sec": 0, 00:20:06.845 "reconnect_delay_sec": 0, 00:20:06.845 "fast_io_fail_timeout_sec": 0, 00:20:06.845 "psk": "key0", 00:20:06.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.845 "hdgst": false, 00:20:06.845 "ddgst": false 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_nvme_set_hotplug", 00:20:06.845 "params": { 00:20:06.845 "period_us": 100000, 00:20:06.845 "enable": false 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_enable_histogram", 00:20:06.845 
"params": { 00:20:06.845 "name": "nvme0n1", 00:20:06.845 "enable": true 00:20:06.845 } 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "method": "bdev_wait_for_examine" 00:20:06.845 } 00:20:06.845 ] 00:20:06.845 }, 00:20:06.845 { 00:20:06.845 "subsystem": "nbd", 00:20:06.845 "config": [] 00:20:06.845 } 00:20:06.845 ] 00:20:06.845 }' 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 549346 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 549346 ']' 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 549346 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549346 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549346' 00:20:06.845 killing process with pid 549346 00:20:06.845 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 549346 00:20:06.845 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.845 00:20:06.845 Latency(us) 00:20:06.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.846 =================================================================================================================== 00:20:06.846 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.846 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 549346 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 549318 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 549318 ']' 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 549318 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549318 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549318' 00:20:07.103 killing process with pid 549318 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 549318 00:20:07.103 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 549318 00:20:07.361 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:07.361 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:20:07.361 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:07.361 "subsystems": [ 00:20:07.361 { 00:20:07.361 "subsystem": "keyring", 00:20:07.361 "config": [ 00:20:07.361 { 00:20:07.361 "method": "keyring_file_add_key", 00:20:07.361 "params": { 00:20:07.361 "name": "key0", 00:20:07.361 "path": "/tmp/tmp.ZEzMqqkb86" 00:20:07.361 } 00:20:07.361 } 00:20:07.361 ] 00:20:07.361 }, 00:20:07.361 { 00:20:07.361 "subsystem": "iobuf", 00:20:07.361 "config": [ 00:20:07.361 { 00:20:07.361 "method": "iobuf_set_options", 00:20:07.361 "params": { 00:20:07.361 "small_pool_count": 8192, 00:20:07.361 "large_pool_count": 1024, 00:20:07.361 "small_bufsize": 8192, 00:20:07.361 "large_bufsize": 135168 00:20:07.361 } 00:20:07.361 } 00:20:07.361 ] 00:20:07.361 }, 00:20:07.361 { 00:20:07.361 "subsystem": "sock", 00:20:07.361 "config": [ 00:20:07.361 { 00:20:07.361 "method": "sock_set_default_impl", 00:20:07.361 "params": { 00:20:07.361 "impl_name": "posix" 00:20:07.361 } 00:20:07.361 }, 00:20:07.361 { 00:20:07.361 "method": "sock_impl_set_options", 00:20:07.361 "params": { 00:20:07.361 "impl_name": "ssl", 00:20:07.361 "recv_buf_size": 4096, 00:20:07.361 "send_buf_size": 4096, 00:20:07.361 "enable_recv_pipe": true, 00:20:07.361 "enable_quickack": false, 00:20:07.361 "enable_placement_id": 0, 00:20:07.361 "enable_zerocopy_send_server": true, 00:20:07.361 "enable_zerocopy_send_client": false, 00:20:07.361 "zerocopy_threshold": 0, 00:20:07.361 "tls_version": 0, 00:20:07.361 "enable_ktls": false 00:20:07.361 } 00:20:07.361 }, 00:20:07.361 { 00:20:07.361 "method": "sock_impl_set_options", 00:20:07.361 "params": { 00:20:07.361 "impl_name": "posix", 00:20:07.361 "recv_buf_size": 2097152, 00:20:07.361 "send_buf_size": 2097152, 00:20:07.361 "enable_recv_pipe": true, 00:20:07.361 "enable_quickack": false, 00:20:07.361 "enable_placement_id": 0, 00:20:07.361 "enable_zerocopy_send_server": true, 00:20:07.361 "enable_zerocopy_send_client": false, 00:20:07.361 "zerocopy_threshold": 0, 00:20:07.361 "tls_version": 0, 00:20:07.361 "enable_ktls": false 00:20:07.361 } 00:20:07.361 } 00:20:07.361 ] 00:20:07.361 }, 00:20:07.361 { 00:20:07.362 "subsystem": "vmd", 00:20:07.362 "config": [] 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "subsystem": "accel", 00:20:07.362 "config": [ 00:20:07.362 { 00:20:07.362 "method": "accel_set_options", 00:20:07.362 "params": { 00:20:07.362 "small_cache_size": 128, 00:20:07.362 "large_cache_size": 16, 00:20:07.362 "task_count": 2048, 00:20:07.362 "sequence_count": 2048, 00:20:07.362 "buf_count": 2048 00:20:07.362 } 00:20:07.362 } 00:20:07.362 ] 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "subsystem": "bdev", 00:20:07.362 "config": [ 00:20:07.362 { 00:20:07.362 "method": "bdev_set_options", 00:20:07.362 "params": { 00:20:07.362 "bdev_io_pool_size": 65535, 00:20:07.362 "bdev_io_cache_size": 256, 00:20:07.362 "bdev_auto_examine": true, 00:20:07.362 "iobuf_small_cache_size": 128, 00:20:07.362 "iobuf_large_cache_size": 16 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "bdev_raid_set_options", 00:20:07.362 "params": { 00:20:07.362 "process_window_size_kb": 1024, 00:20:07.362 "process_max_bandwidth_mb_sec": 0 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "bdev_iscsi_set_options", 00:20:07.362 "params": { 00:20:07.362 "timeout_sec": 30 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "bdev_nvme_set_options", 00:20:07.362 "params": { 00:20:07.362 "action_on_timeout": "none", 00:20:07.362 
"timeout_us": 0, 00:20:07.362 "timeout_admin_us": 0, 00:20:07.362 "keep_alive_timeout_ms": 10000, 00:20:07.362 "arbitration_burst": 0, 00:20:07.362 "low_priority_weight": 0, 00:20:07.362 "medium_priority_weight": 0, 00:20:07.362 "high_priority_weight": 0, 00:20:07.362 "nvme_adminq_poll_period_us": 10000, 00:20:07.362 "nvme_ioq_poll_period_us": 0, 00:20:07.362 "io_queue_requests": 0, 00:20:07.362 "delay_cmd_submit": true, 00:20:07.362 "transport_retry_count": 4, 00:20:07.362 "bdev_retry_count": 3, 00:20:07.362 "transport_ack_timeout": 0, 00:20:07.362 "ctrlr_loss_timeout_sec": 0, 00:20:07.362 "reconnect_delay_sec": 0, 00:20:07.362 "fast_io_fail_timeout_sec": 0, 00:20:07.362 "disable_auto_failback": false, 00:20:07.362 "generate_uuids": false, 00:20:07.362 "transport_tos": 0, 00:20:07.362 "nvme_error_stat": false, 00:20:07.362 "rdma_srq_size": 0, 00:20:07.362 "io_path_stat": false, 00:20:07.362 "allow_accel_sequence": false, 00:20:07.362 "rdma_max_cq_size": 0, 00:20:07.362 "rdma_cm_event_timeout_ms": 0, 00:20:07.362 "dhchap_digests": [ 00:20:07.362 "sha256", 00:20:07.362 "sha384", 00:20:07.362 "sha512" 00:20:07.362 ], 00:20:07.362 "dhchap_dhgroups": [ 00:20:07.362 "null", 00:20:07.362 "ffdhe2048", 00:20:07.362 "ffdhe3072", 00:20:07.362 "ffdhe4096", 00:20:07.362 "ffdhe6144", 00:20:07.362 "ffdhe8192" 00:20:07.362 ] 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "bdev_nvme_set_hotplug", 00:20:07.362 "params": { 00:20:07.362 "period_us": 100000, 00:20:07.362 "enable": false 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "bdev_malloc_create", 00:20:07.362 "params": { 00:20:07.362 "name": "malloc0", 00:20:07.362 "num_blocks": 8192, 00:20:07.362 "block_size": 4096, 00:20:07.362 "physical_block_size": 4096, 00:20:07.362 "uuid": "aa14ac45-e21a-446f-b182-79990dd5850d", 00:20:07.362 "optimal_io_boundary": 0, 00:20:07.362 "md_size": 0, 00:20:07.362 "dif_type": 0, 00:20:07.362 "dif_is_head_of_md": false, 00:20:07.362 "dif_pi_format": 0 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "bdev_wait_for_examine" 00:20:07.362 } 00:20:07.362 ] 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "subsystem": "nbd", 00:20:07.362 "config": [] 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "subsystem": "scheduler", 00:20:07.362 "config": [ 00:20:07.362 { 00:20:07.362 "method": "framework_set_scheduler", 00:20:07.362 "params": { 00:20:07.362 "name": "static" 00:20:07.362 } 00:20:07.362 } 00:20:07.362 ] 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "subsystem": "nvmf", 00:20:07.362 "config": [ 00:20:07.362 { 00:20:07.362 "method": "nvmf_set_config", 00:20:07.362 "params": { 00:20:07.362 "discovery_filter": "match_any", 00:20:07.362 "admin_cmd_passthru": { 00:20:07.362 "identify_ctrlr": false 00:20:07.362 } 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "nvmf_set_max_subsystems", 00:20:07.362 "params": { 00:20:07.362 "max_subsystems": 1024 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "nvmf_set_crdt", 00:20:07.362 "params": { 00:20:07.362 "crdt1": 0, 00:20:07.362 "crdt2": 0, 00:20:07.362 "crdt3": 0 00:20:07.362 } 00:20:07.362 }, 00:20:07.362 { 00:20:07.362 "method": "nvmf_create_transport", 00:20:07.362 "params": { 00:20:07.362 "trtype": "TCP", 00:20:07.362 "max_queue_depth": 128, 00:20:07.363 "max_io_qpairs_per_ctrlr": 127, 00:20:07.363 "in_capsule_data_size": 4096, 00:20:07.363 "max_io_size": 131072, 00:20:07.363 "io_unit_size": 131072, 00:20:07.363 "max_aq_depth": 128, 00:20:07.363 "num_shared_buffers": 511, 00:20:07.363 
"buf_cache_size": 4294967295, 00:20:07.363 "dif_insert_or_strip": false, 00:20:07.363 "zcopy": false, 00:20:07.363 "c2h_success": false, 00:20:07.363 "sock_priority": 0, 00:20:07.363 "abort_timeout_sec": 1, 00:20:07.363 "ack_timeout": 0, 00:20:07.363 "data_wr_pool_size": 0 00:20:07.363 } 00:20:07.363 }, 00:20:07.363 { 00:20:07.363 "method": "nvmf_create_subsystem", 00:20:07.363 "params": { 00:20:07.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.363 "allow_any_host": false, 00:20:07.363 "serial_number": "00000000000000000000", 00:20:07.363 "model_number": "SPDK bdev Controller", 00:20:07.363 "max_namespaces": 32, 00:20:07.363 "min_cntlid": 1, 00:20:07.363 "max_cntlid": 65519, 00:20:07.363 "ana_reporting": false 00:20:07.363 } 00:20:07.363 }, 00:20:07.363 { 00:20:07.363 "method": "nvmf_subsystem_add_host", 00:20:07.363 "params": { 00:20:07.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.363 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.363 "psk": "key0" 00:20:07.363 } 00:20:07.363 }, 00:20:07.363 { 00:20:07.363 "method": "nvmf_subsystem_add_ns", 00:20:07.363 "params": { 00:20:07.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.363 "namespace": { 00:20:07.363 "nsid": 1, 00:20:07.363 "bdev_name": "malloc0", 00:20:07.363 "nguid": "AA14AC45E21A446FB18279990DD5850D", 00:20:07.363 "uuid": "aa14ac45-e21a-446f-b182-79990dd5850d", 00:20:07.363 "no_auto_visible": false 00:20:07.363 } 00:20:07.363 } 00:20:07.363 }, 00:20:07.363 { 00:20:07.363 "method": "nvmf_subsystem_add_listener", 00:20:07.363 "params": { 00:20:07.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.363 "listen_address": { 00:20:07.363 "trtype": "TCP", 00:20:07.363 "adrfam": "IPv4", 00:20:07.363 "traddr": "10.0.0.2", 00:20:07.363 "trsvcid": "4420" 00:20:07.363 }, 00:20:07.363 "secure_channel": false, 00:20:07.363 "sock_impl": "ssl" 00:20:07.363 } 00:20:07.363 } 00:20:07.363 ] 00:20:07.363 } 00:20:07.363 ] 00:20:07.363 }' 00:20:07.363 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.363 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=549754 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 549754 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 549754 ']' 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.621 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.621 [2024-07-25 09:34:40.147972] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:20:07.621 [2024-07-25 09:34:40.148056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.621 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.621 [2024-07-25 09:34:40.222606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.621 [2024-07-25 09:34:40.335597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.621 [2024-07-25 09:34:40.335650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.621 [2024-07-25 09:34:40.335680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.621 [2024-07-25 09:34:40.335692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.621 [2024-07-25 09:34:40.335703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.621 [2024-07-25 09:34:40.335796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.879 [2024-07-25 09:34:40.573898] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.136 [2024-07-25 09:34:40.618130] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.136 [2024-07-25 09:34:40.618380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=549905 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 549905 /var/tmp/bdevperf.sock 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 549905 ']' 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:08.701 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:08.701 "subsystems": [ 00:20:08.701 { 00:20:08.701 "subsystem": "keyring", 00:20:08.701 "config": [ 00:20:08.701 { 00:20:08.701 "method": "keyring_file_add_key", 00:20:08.701 "params": { 00:20:08.701 "name": "key0", 00:20:08.702 "path": "/tmp/tmp.ZEzMqqkb86" 00:20:08.702 } 00:20:08.702 } 00:20:08.702 ] 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "subsystem": "iobuf", 00:20:08.702 "config": [ 00:20:08.702 { 00:20:08.702 "method": "iobuf_set_options", 00:20:08.702 "params": { 00:20:08.702 "small_pool_count": 8192, 00:20:08.702 "large_pool_count": 1024, 00:20:08.702 "small_bufsize": 8192, 00:20:08.702 "large_bufsize": 135168 00:20:08.702 } 00:20:08.702 } 00:20:08.702 ] 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "subsystem": "sock", 00:20:08.702 "config": [ 00:20:08.702 { 00:20:08.702 "method": "sock_set_default_impl", 00:20:08.702 "params": { 00:20:08.702 "impl_name": "posix" 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "sock_impl_set_options", 00:20:08.702 "params": { 00:20:08.702 "impl_name": "ssl", 00:20:08.702 "recv_buf_size": 4096, 00:20:08.702 "send_buf_size": 4096, 00:20:08.702 "enable_recv_pipe": true, 00:20:08.702 "enable_quickack": false, 00:20:08.702 "enable_placement_id": 0, 00:20:08.702 "enable_zerocopy_send_server": true, 00:20:08.702 "enable_zerocopy_send_client": false, 00:20:08.702 "zerocopy_threshold": 0, 00:20:08.702 "tls_version": 0, 00:20:08.702 "enable_ktls": false 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "sock_impl_set_options", 00:20:08.702 "params": { 00:20:08.702 "impl_name": "posix", 00:20:08.702 "recv_buf_size": 2097152, 00:20:08.702 "send_buf_size": 2097152, 00:20:08.702 "enable_recv_pipe": true, 00:20:08.702 "enable_quickack": false, 00:20:08.702 "enable_placement_id": 0, 00:20:08.702 "enable_zerocopy_send_server": true, 00:20:08.702 "enable_zerocopy_send_client": false, 00:20:08.702 "zerocopy_threshold": 0, 00:20:08.702 "tls_version": 0, 00:20:08.702 "enable_ktls": false 00:20:08.702 } 00:20:08.702 } 00:20:08.702 ] 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "subsystem": "vmd", 00:20:08.702 "config": [] 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "subsystem": "accel", 00:20:08.702 "config": [ 00:20:08.702 { 00:20:08.702 "method": "accel_set_options", 00:20:08.702 "params": { 00:20:08.702 "small_cache_size": 128, 00:20:08.702 "large_cache_size": 16, 00:20:08.702 "task_count": 2048, 00:20:08.702 "sequence_count": 2048, 00:20:08.702 "buf_count": 2048 00:20:08.702 } 00:20:08.702 } 00:20:08.702 ] 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "subsystem": "bdev", 00:20:08.702 "config": [ 00:20:08.702 { 00:20:08.702 "method": "bdev_set_options", 00:20:08.702 "params": { 00:20:08.702 "bdev_io_pool_size": 65535, 00:20:08.702 "bdev_io_cache_size": 256, 00:20:08.702 "bdev_auto_examine": true, 00:20:08.702 "iobuf_small_cache_size": 128, 00:20:08.702 "iobuf_large_cache_size": 16 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_raid_set_options", 00:20:08.702 "params": { 00:20:08.702 "process_window_size_kb": 1024, 00:20:08.702 "process_max_bandwidth_mb_sec": 0 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_iscsi_set_options", 00:20:08.702 "params": { 00:20:08.702 "timeout_sec": 30 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_nvme_set_options", 00:20:08.702 "params": { 00:20:08.702 "action_on_timeout": "none", 00:20:08.702 "timeout_us": 0, 
00:20:08.702 "timeout_admin_us": 0, 00:20:08.702 "keep_alive_timeout_ms": 10000, 00:20:08.702 "arbitration_burst": 0, 00:20:08.702 "low_priority_weight": 0, 00:20:08.702 "medium_priority_weight": 0, 00:20:08.702 "high_priority_weight": 0, 00:20:08.702 "nvme_adminq_poll_period_us": 10000, 00:20:08.702 "nvme_ioq_poll_period_us": 0, 00:20:08.702 "io_queue_requests": 512, 00:20:08.702 "delay_cmd_submit": true, 00:20:08.702 "transport_retry_count": 4, 00:20:08.702 "bdev_retry_count": 3, 00:20:08.702 "transport_ack_timeout": 0, 00:20:08.702 "ctrlr_loss_timeout_sec": 0, 00:20:08.702 "reconnect_delay_sec": 0, 00:20:08.702 "fast_io_fail_timeout_sec": 0, 00:20:08.702 "disable_auto_failback": false, 00:20:08.702 "generate_uuids": false, 00:20:08.702 "transport_tos": 0, 00:20:08.702 "nvme_error_stat": false, 00:20:08.702 "rdma_srq_size": 0, 00:20:08.702 "io_path_stat": false, 00:20:08.702 "allow_accel_sequence": false, 00:20:08.702 "rdma_max_cq_size": 0, 00:20:08.702 "rdma_cm_event_timeout_ms": 0, 00:20:08.702 "dhchap_digests": [ 00:20:08.702 "sha256", 00:20:08.702 "sha384", 00:20:08.702 "sha512" 00:20:08.702 ], 00:20:08.702 "dhchap_dhgroups": [ 00:20:08.702 "null", 00:20:08.702 "ffdhe2048", 00:20:08.702 "ffdhe3072", 00:20:08.702 "ffdhe4096", 00:20:08.702 "ffdhe6144", 00:20:08.702 "ffdhe8192" 00:20:08.702 ] 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_nvme_attach_controller", 00:20:08.702 "params": { 00:20:08.702 "name": "nvme0", 00:20:08.702 "trtype": "TCP", 00:20:08.702 "adrfam": "IPv4", 00:20:08.702 "traddr": "10.0.0.2", 00:20:08.702 "trsvcid": "4420", 00:20:08.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.702 "prchk_reftag": false, 00:20:08.702 "prchk_guard": false, 00:20:08.702 "ctrlr_loss_timeout_sec": 0, 00:20:08.702 "reconnect_delay_sec": 0, 00:20:08.702 "fast_io_fail_timeout_sec": 0, 00:20:08.702 "psk": "key0", 00:20:08.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.702 "hdgst": false, 00:20:08.702 "ddgst": false 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_nvme_set_hotplug", 00:20:08.702 "params": { 00:20:08.702 "period_us": 100000, 00:20:08.702 "enable": false 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_enable_histogram", 00:20:08.702 "params": { 00:20:08.702 "name": "nvme0n1", 00:20:08.702 "enable": true 00:20:08.702 } 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "method": "bdev_wait_for_examine" 00:20:08.702 } 00:20:08.702 ] 00:20:08.702 }, 00:20:08.702 { 00:20:08.702 "subsystem": "nbd", 00:20:08.702 "config": [] 00:20:08.702 } 00:20:08.702 ] 00:20:08.702 }' 00:20:08.702 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.702 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.702 [2024-07-25 09:34:41.211347] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:20:08.702 [2024-07-25 09:34:41.211431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549905 ] 00:20:08.702 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.702 [2024-07-25 09:34:41.272522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.702 [2024-07-25 09:34:41.387940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.961 [2024-07-25 09:34:41.571376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.527 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.527 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:09.527 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:09.527 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:09.784 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.784 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.042 Running I/O for 1 seconds... 00:20:10.977 00:20:10.977 Latency(us) 00:20:10.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.977 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:10.977 Verification LBA range: start 0x0 length 0x2000 00:20:10.977 nvme0n1 : 1.02 3139.07 12.26 0.00 0.00 40340.69 7281.78 47768.46 00:20:10.977 =================================================================================================================== 00:20:10.977 Total : 3139.07 12.26 0.00 0.00 40340.69 7281.78 47768.46 00:20:10.977 0 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:10.977 nvmf_trace.0 00:20:10.977 09:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 549905 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 549905 ']' 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 549905 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549905 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549905' 00:20:10.977 killing process with pid 549905 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 549905 00:20:10.977 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.977 00:20:10.977 Latency(us) 00:20:10.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.977 =================================================================================================================== 00:20:10.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.977 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 549905 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.234 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.234 rmmod nvme_tcp 00:20:11.491 rmmod nvme_fabrics 00:20:11.491 rmmod nvme_keyring 00:20:11.491 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 549754 ']' 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 549754 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 549754 ']' 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 549754 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.491 09:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549754 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549754' 00:20:11.491 killing process with pid 549754 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 549754 00:20:11.491 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 549754 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.750 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.650 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.650 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.U8H88AhJdZ /tmp/tmp.MmHETCEk7n /tmp/tmp.ZEzMqqkb86 00:20:13.650 00:20:13.650 real 1m20.310s 00:20:13.650 user 2m7.954s 00:20:13.650 sys 0m28.219s 00:20:13.650 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.650 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.650 ************************************ 00:20:13.650 END TEST nvmf_tls 00:20:13.650 ************************************ 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 ************************************ 00:20:13.909 START TEST nvmf_fips 00:20:13.909 ************************************ 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:13.909 * Looking for test storage... 
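Before the FIPS unit produces its own output, the nvmf_tls teardown traced just above is worth summarizing: it archives the shared-memory trace file, kills the bdevperf and nvmf_tgt reactors, unloads the NVMe-oF kernel modules, flushes the test interface, and removes the three temporary PSK files. A compressed sketch of that flow, with the pid variables as placeholders (the real run used 549905 for bdevperf and 549754 for nvmf_tgt):

kill "$bdevperf_pid"; kill "$nvmf_pid"
# Roughly what nvmftestfini does: drop the kernel initiator modules and flush the link.
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
ip -4 addr flush cvl_0_1
rm -f /tmp/tmp.U8H88AhJdZ /tmp/tmp.MmHETCEk7n /tmp/tmp.ZEzMqqkb86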
00:20:13.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:13.909 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:13.910 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:14.168 Error setting digest 00:20:14.168 0002515FF57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:14.168 0002515FF57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.168 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:16.068 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 
00:20:16.068 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.068 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:16.069 Found net devices under 0000:82:00.0: cvl_0_0 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:16.069 Found net devices under 0000:82:00.1: cvl_0_1 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.069 
09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.069 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:20:16.327 00:20:16.327 --- 10.0.0.2 ping statistics --- 00:20:16.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.327 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:20:16.327 00:20:16.327 --- 10.0.0.1 ping statistics --- 00:20:16.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.327 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=552266 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 552266 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 552266 ']' 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.327 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:16.327 [2024-07-25 09:34:48.922854] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
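At this point fips.sh has the target namespace up (cvl_0_0 with 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 with 10.0.0.1 outside, ping verified in both directions) and is starting nvmf_tgt on core mask 0x2 inside that namespace, then waiting for its RPC socket. A minimal sketch of that launch-and-wait pattern, with the polling loop simplified relative to waitforlisten in autotest_common.sh and the timeout chosen arbitrarily:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll until the RPC socket answers; ~10 s cap is an assumption.
for _ in $(seq 1 100); do
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done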
00:20:16.327 [2024-07-25 09:34:48.922947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.327 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.327 [2024-07-25 09:34:48.985091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.585 [2024-07-25 09:34:49.091323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.585 [2024-07-25 09:34:49.091396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.585 [2024-07-25 09:34:49.091411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.585 [2024-07-25 09:34:49.091423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.585 [2024-07-25 09:34:49.091433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.585 [2024-07-25 09:34:49.091458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.517 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:17.517 [2024-07-25 09:34:50.203219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.517 [2024-07-25 09:34:50.219219] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.517 [2024-07-25 09:34:50.219455] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.517 
[2024-07-25 09:34:50.250757] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:17.774 malloc0 00:20:17.774 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.774 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=552429 00:20:17.774 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 552429 /var/tmp/bdevperf.sock 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 552429 ']' 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.775 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.775 [2024-07-25 09:34:50.336222] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:17.775 [2024-07-25 09:34:50.336304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552429 ] 00:20:17.775 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.775 [2024-07-25 09:34:50.395294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.775 [2024-07-25 09:34:50.502291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.032 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.032 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:18.032 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:18.290 [2024-07-25 09:34:50.837118] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.290 [2024-07-25 09:34:50.837241] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.290 TLSTESTn1 00:20:18.290 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.548 Running I/O for 10 seconds... 
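The two commands above are the heart of the FIPS data-path check: attach a TLS-wrapped TCP controller named TLSTEST using the PSK written to test/nvmf/fips/key.txt, then drive ten seconds of verify I/O through bdevperf's RPC helper. Condensed for readability, with the long workspace prefix factored into a variable; the ten-second runtime comes from bdevperf's own "-t 10" invocation:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk $SPDK/test/nvmf/fips/key.txt
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests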
00:20:28.636 00:20:28.636 Latency(us) 00:20:28.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.636 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:28.636 Verification LBA range: start 0x0 length 0x2000 00:20:28.636 TLSTESTn1 : 10.02 3331.93 13.02 0.00 0.00 38355.15 6310.87 39224.51 00:20:28.636 =================================================================================================================== 00:20:28.636 Total : 3331.93 13.02 0.00 0.00 38355.15 6310.87 39224.51 00:20:28.636 0 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:28.636 nvmf_trace.0 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 552429 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 552429 ']' 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 552429 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 552429 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 552429' 00:20:28.636 killing process with pid 552429 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 552429 00:20:28.636 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.636 00:20:28.636 Latency(us) 00:20:28.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.636 =================================================================================================================== 00:20:28.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.636 [2024-07-25 
09:35:01.190466] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:28.636 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 552429 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.893 rmmod nvme_tcp 00:20:28.893 rmmod nvme_fabrics 00:20:28.893 rmmod nvme_keyring 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 552266 ']' 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 552266 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 552266 ']' 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 552266 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 552266 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 552266' 00:20:28.893 killing process with pid 552266 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 552266 00:20:28.893 [2024-07-25 09:35:01.548252] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:28.893 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 552266 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.151 09:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.151 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:31.679 00:20:31.679 real 0m17.457s 00:20:31.679 user 0m21.502s 00:20:31.679 sys 0m6.611s 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:31.679 ************************************ 00:20:31.679 END TEST nvmf_fips 00:20:31.679 ************************************ 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.679 09:35:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.580 
09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:33.580 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:33.580 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:33.580 Found net devices under 0000:82:00.0: cvl_0_0 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:33.580 Found net devices under 0000:82:00.1: cvl_0_1 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.580 ************************************ 00:20:33.580 START TEST nvmf_perf_adq 00:20:33.580 ************************************ 00:20:33.580 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:33.580 * Looking for test storage... 
00:20:33.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:33.580 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.581 09:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:33.581 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:35.484 09:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:35.484 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:35.484 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:35.484 Found net devices under 0000:82:00.0: cvl_0_0 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:35.484 Found net devices under 0000:82:00.1: cvl_0_1 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:35.484 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:36.051 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:39.334 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.606 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:44.607 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:44.607 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:44.607 Found net devices under 0000:82:00.0: cvl_0_0 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.607 09:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:44.607 Found net devices under 0000:82:00.1: cvl_0_1 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.607 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
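The trace above shows nvmf_tcp_init building the point-to-point test topology: the first e810 port (cvl_0_0) is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2/24, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24; the lines that follow open TCP port 4420 and ping-check both directions. A minimal stand-alone sketch of the same setup is given below; the interface names and addresses are copied from this log and would differ on other hosts.
# sketch only: namespace topology used by the harness (assumes two
# back-to-back ports named cvl_0_0 and cvl_0_1, as in this log)
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                # target reachable from root namespace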
00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:20:44.607 00:20:44.607 --- 10.0.0.2 ping statistics --- 00:20:44.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.607 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:20:44.607 00:20:44.607 --- 10.0.0.1 ping statistics --- 00:20:44.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.607 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=558422 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 558422 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 558422 ']' 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:44.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.607 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.607 [2024-07-25 09:35:17.118115] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:44.607 [2024-07-25 09:35:17.118199] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.607 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.607 [2024-07-25 09:35:17.192424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.607 [2024-07-25 09:35:17.315059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.607 [2024-07-25 09:35:17.315108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.607 [2024-07-25 09:35:17.315125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.607 [2024-07-25 09:35:17.315139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.607 [2024-07-25 09:35:17.315151] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.607 [2024-07-25 09:35:17.315232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.608 [2024-07-25 09:35:17.315287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.608 [2024-07-25 09:35:17.315312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.608 [2024-07-25 09:35:17.315315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.540 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.540 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
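The rpc_cmd calls in this part of the trace (adq_configure_nvmf_target) prepare the target for the ADQ perf run: the default posix sock implementation is configured (placement-id mode 0, server-side zero-copy sends enabled) before framework init, the TCP transport is created with an 8192-byte io-unit-size and sock priority 0, and a 64 MB Malloc bdev is added as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A hedged sketch of the equivalent sequence with scripts/rpc.py follows; rpc_cmd in the trace is the autotest wrapper around that script, and the arguments below are copied from this log (the target was started with --wait-for-rpc, so the sock options must be set before framework_start_init).
# sketch: equivalent RPC sequence against the default /var/tmp/spdk.sock
RPC=./scripts/rpc.py
$RPC sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
$RPC framework_start_init                  # finish startup deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1  # 64 MB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420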
00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.541 [2024-07-25 09:35:18.264125] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.541 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 Malloc1 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 [2024-07-25 09:35:18.315183] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=558580 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:45.799 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:45.799 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:47.698 "tick_rate": 2700000000, 00:20:47.698 "poll_groups": [ 00:20:47.698 { 00:20:47.698 "name": "nvmf_tgt_poll_group_000", 00:20:47.698 "admin_qpairs": 1, 00:20:47.698 "io_qpairs": 1, 00:20:47.698 "current_admin_qpairs": 1, 00:20:47.698 "current_io_qpairs": 1, 00:20:47.698 "pending_bdev_io": 0, 00:20:47.698 "completed_nvme_io": 19826, 00:20:47.698 "transports": [ 00:20:47.698 { 00:20:47.698 "trtype": "TCP" 00:20:47.698 } 00:20:47.698 ] 00:20:47.698 }, 00:20:47.698 { 00:20:47.698 "name": "nvmf_tgt_poll_group_001", 00:20:47.698 "admin_qpairs": 0, 00:20:47.698 "io_qpairs": 1, 00:20:47.698 "current_admin_qpairs": 0, 00:20:47.698 "current_io_qpairs": 1, 00:20:47.698 "pending_bdev_io": 0, 00:20:47.698 "completed_nvme_io": 20136, 00:20:47.698 "transports": [ 00:20:47.698 { 00:20:47.698 "trtype": "TCP" 00:20:47.698 } 00:20:47.698 ] 00:20:47.698 }, 00:20:47.698 { 00:20:47.698 "name": "nvmf_tgt_poll_group_002", 00:20:47.698 "admin_qpairs": 0, 00:20:47.698 "io_qpairs": 1, 00:20:47.698 "current_admin_qpairs": 0, 00:20:47.698 "current_io_qpairs": 1, 00:20:47.698 "pending_bdev_io": 0, 00:20:47.698 "completed_nvme_io": 20422, 00:20:47.698 "transports": [ 00:20:47.698 { 00:20:47.698 "trtype": "TCP" 00:20:47.698 } 00:20:47.698 ] 00:20:47.698 }, 00:20:47.698 { 00:20:47.698 "name": "nvmf_tgt_poll_group_003", 00:20:47.698 "admin_qpairs": 0, 00:20:47.698 "io_qpairs": 1, 00:20:47.698 "current_admin_qpairs": 0, 00:20:47.698 "current_io_qpairs": 1, 00:20:47.698 "pending_bdev_io": 0, 00:20:47.698 "completed_nvme_io": 19923, 00:20:47.698 "transports": [ 00:20:47.698 { 00:20:47.698 "trtype": "TCP" 00:20:47.698 } 00:20:47.698 ] 00:20:47.698 } 00:20:47.698 ] 00:20:47.698 }' 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:47.698 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 558580 00:20:55.810 Initializing NVMe Controllers 00:20:55.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:55.810 Initialization complete. Launching workers. 00:20:55.810 ======================================================== 00:20:55.810 Latency(us) 00:20:55.810 Device Information : IOPS MiB/s Average min max 00:20:55.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10296.50 40.22 6216.46 2409.58 10438.40 00:20:55.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10563.50 41.26 6059.56 2211.06 10267.03 00:20:55.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10625.10 41.50 6024.85 2125.63 9877.62 00:20:55.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10490.60 40.98 6101.64 2287.99 10406.98 00:20:55.810 ======================================================== 00:20:55.810 Total : 41975.69 163.97 6099.78 2125.63 10438.40 00:20:55.810 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.810 rmmod nvme_tcp 00:20:55.810 rmmod nvme_fabrics 00:20:55.810 rmmod nvme_keyring 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 558422 ']' 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 558422 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 558422 ']' 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 558422 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 558422 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.810 09:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 558422' 00:20:55.810 killing process with pid 558422 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 558422 00:20:55.810 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 558422 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.376 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.275 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:58.275 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:58.275 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:58.842 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:00.743 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:06.012 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.012 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:06.013 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:06.013 Found net devices under 0000:82:00.0: cvl_0_0 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:06.013 Found net devices under 0000:82:00.1: cvl_0_1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:06.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:21:06.013 00:21:06.013 --- 10.0.0.2 ping statistics --- 00:21:06.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.013 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:06.013 00:21:06.013 --- 10.0.0.1 ping statistics --- 00:21:06.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.013 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:06.013 net.core.busy_poll = 1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:06.013 net.core.busy_read = 1 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:06.013 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:06.272 
09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=561218 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 561218 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 561218 ']' 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.272 09:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.272 [2024-07-25 09:35:38.871391] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:06.273 [2024-07-25 09:35:38.871488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.273 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.273 [2024-07-25 09:35:38.935898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.531 [2024-07-25 09:35:39.048109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.531 [2024-07-25 09:35:39.048175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.531 [2024-07-25 09:35:39.048203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.531 [2024-07-25 09:35:39.048215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.531 [2024-07-25 09:35:39.048225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
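The adq_configure_driver sequence traced above reduces to a short series of host commands on the target-side E810 port. A condensed sketch follows (interface cvl_0_0, namespace cvl_0_0_ns_spdk, and listener 10.0.0.2:4420 are the values from this run; absolute jenkins paths are shortened):

# Enable hardware TC offload and turn off packet-inspect optimization on the port.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Busy polling keeps application threads spinning on their sockets instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 on queues 0-1 for default traffic, TC1 on queues 2-3 for NVMe/TCP.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC1 (skip_sw keeps the match in the NIC).
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Align XPS/RxQ CPU affinities for the port (helper shipped with SPDK).
ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0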
00:21:06.531 [2024-07-25 09:35:39.048308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.531 [2024-07-25 09:35:39.048440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.531 [2024-07-25 09:35:39.048466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.531 [2024-07-25 09:35:39.048469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.531 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.531 [2024-07-25 09:35:39.262268] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.790 Malloc1 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.790 [2024-07-25 09:35:39.315519] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=561252 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:06.790 09:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:06.790 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:08.691 "tick_rate": 2700000000, 00:21:08.691 "poll_groups": [ 00:21:08.691 { 00:21:08.691 "name": "nvmf_tgt_poll_group_000", 00:21:08.691 "admin_qpairs": 1, 00:21:08.691 "io_qpairs": 3, 00:21:08.691 "current_admin_qpairs": 1, 00:21:08.691 
"current_io_qpairs": 3, 00:21:08.691 "pending_bdev_io": 0, 00:21:08.691 "completed_nvme_io": 26914, 00:21:08.691 "transports": [ 00:21:08.691 { 00:21:08.691 "trtype": "TCP" 00:21:08.691 } 00:21:08.691 ] 00:21:08.691 }, 00:21:08.691 { 00:21:08.691 "name": "nvmf_tgt_poll_group_001", 00:21:08.691 "admin_qpairs": 0, 00:21:08.691 "io_qpairs": 1, 00:21:08.691 "current_admin_qpairs": 0, 00:21:08.691 "current_io_qpairs": 1, 00:21:08.691 "pending_bdev_io": 0, 00:21:08.691 "completed_nvme_io": 26285, 00:21:08.691 "transports": [ 00:21:08.691 { 00:21:08.691 "trtype": "TCP" 00:21:08.691 } 00:21:08.691 ] 00:21:08.691 }, 00:21:08.691 { 00:21:08.691 "name": "nvmf_tgt_poll_group_002", 00:21:08.691 "admin_qpairs": 0, 00:21:08.691 "io_qpairs": 0, 00:21:08.691 "current_admin_qpairs": 0, 00:21:08.691 "current_io_qpairs": 0, 00:21:08.691 "pending_bdev_io": 0, 00:21:08.691 "completed_nvme_io": 0, 00:21:08.691 "transports": [ 00:21:08.691 { 00:21:08.691 "trtype": "TCP" 00:21:08.691 } 00:21:08.691 ] 00:21:08.691 }, 00:21:08.691 { 00:21:08.691 "name": "nvmf_tgt_poll_group_003", 00:21:08.691 "admin_qpairs": 0, 00:21:08.691 "io_qpairs": 0, 00:21:08.691 "current_admin_qpairs": 0, 00:21:08.691 "current_io_qpairs": 0, 00:21:08.691 "pending_bdev_io": 0, 00:21:08.691 "completed_nvme_io": 0, 00:21:08.691 "transports": [ 00:21:08.691 { 00:21:08.691 "trtype": "TCP" 00:21:08.691 } 00:21:08.691 ] 00:21:08.691 } 00:21:08.691 ] 00:21:08.691 }' 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:08.691 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 561252 00:21:16.801 Initializing NVMe Controllers 00:21:16.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:16.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:16.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:16.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:16.802 Initialization complete. Launching workers. 
00:21:16.802 ======================================================== 00:21:16.802 Latency(us) 00:21:16.802 Device Information : IOPS MiB/s Average min max 00:21:16.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4792.40 18.72 13410.07 1876.03 62480.39 00:21:16.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4773.80 18.65 13408.38 1301.02 62279.22 00:21:16.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4459.70 17.42 14351.88 2024.54 62269.98 00:21:16.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14037.90 54.84 4559.36 1875.36 6987.09 00:21:16.802 ======================================================== 00:21:16.802 Total : 28063.80 109.62 9132.20 1301.02 62480.39 00:21:16.802 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.802 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:16.802 rmmod nvme_tcp 00:21:16.802 rmmod nvme_fabrics 00:21:17.060 rmmod nvme_keyring 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 561218 ']' 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 561218 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 561218 ']' 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 561218 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 561218 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 561218' 00:21:17.060 killing process with pid 561218 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 561218 00:21:17.060 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 561218 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.318 09:35:49 
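For reference, the target-side ADQ configuration and the queue-placement check that produced the numbers above come down to a handful of RPCs. This is a sketch rather than the harness itself: rpc_cmd in the autotest scripts is a thin wrapper around scripts/rpc.py, and the expectation of at least two idle poll groups mirrors the [[ 2 -lt 2 ]] test earlier in this log.

# Socket-layer placement hints and zero-copy send on the default (posix) implementation.
impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)
scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
scripts/rpc.py framework_start_init
# TCP transport with 8 KiB IO units and socket priority 1, which the mqprio map above places in TC1.
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
# A 64 MiB malloc namespace (512-byte blocks) exported through cnode1 on 10.0.0.2:4420.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf runs, count poll groups carrying no I/O queue pairs; with ADQ
# steering the connections should collapse onto a subset of cores, leaving the rest idle.
count=$(scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
[[ $count -lt 2 ]] && echo "unexpected queue pair distribution: only $count idle poll groups" >&2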
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.318 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.222 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:19.222 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:19.222 00:21:19.222 real 0m45.953s 00:21:19.222 user 2m43.063s 00:21:19.222 sys 0m11.158s 00:21:19.222 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.222 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.222 ************************************ 00:21:19.222 END TEST nvmf_perf_adq 00:21:19.222 ************************************ 00:21:19.481 09:35:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:19.481 09:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:19.482 09:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.482 09:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.482 ************************************ 00:21:19.482 START TEST nvmf_shutdown 00:21:19.482 ************************************ 00:21:19.482 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:19.482 * Looking for test storage... 
00:21:19.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.482 09:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:19.482 09:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:19.482 ************************************ 00:21:19.482 START TEST nvmf_shutdown_tc1 00:21:19.482 ************************************ 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.482 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:21.384 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:21.384 09:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:21.384 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:21.384 Found net devices under 0000:82:00.0: cvl_0_0 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:21.384 Found net devices under 0000:82:00.1: cvl_0_1 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.384 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.384 09:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.385 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.385 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.385 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:21:21.668 00:21:21.668 --- 10.0.0.2 ping statistics --- 00:21:21.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.668 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:21:21.668 00:21:21.668 --- 10.0.0.1 ping statistics --- 00:21:21.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.668 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.668 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=564411 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 564411 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 564411 ']' 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.669 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.669 [2024-07-25 09:35:54.230252] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:21.669 [2024-07-25 09:35:54.230330] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.669 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.669 [2024-07-25 09:35:54.292771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.983 [2024-07-25 09:35:54.412077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.983 [2024-07-25 09:35:54.412133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.983 [2024-07-25 09:35:54.412149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.984 [2024-07-25 09:35:54.412162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.984 [2024-07-25 09:35:54.412174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
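As in the perf test above, nvmfappstart for nvmf_shutdown_tc1 launches the target inside the prepared namespace and then blocks until the RPC socket is usable. A rough sketch under those assumptions (paths shortened; the rpc_get_methods loop is only an illustrative stand-in for the harness's waitforlisten):

# Start nvmf_tgt on cores 1-4 (-m 0x1E) inside the target namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Wait until the default UNIX-domain RPC socket answers, bailing out if the target dies first.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; break; }
    sleep 0.5
done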
00:21:21.984 [2024-07-25 09:35:54.412270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.984 [2024-07-25 09:35:54.412305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.984 [2024-07-25 09:35:54.412379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.984 [2024-07-25 09:35:54.412385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.984 [2024-07-25 09:35:54.575919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.984 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.984 Malloc1 00:21:21.984 [2024-07-25 09:35:54.665390] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.984 Malloc2 00:21:22.257 Malloc3 00:21:22.257 Malloc4 00:21:22.257 Malloc5 00:21:22.257 Malloc6 00:21:22.257 Malloc7 00:21:22.519 Malloc8 00:21:22.519 Malloc9 00:21:22.519 Malloc10 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=564587 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 564587 /var/tmp/bdevperf.sock 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 564587 ']' 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.519 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.519 { 00:21:22.519 "params": { 00:21:22.519 "name": "Nvme$subsystem", 00:21:22.519 "trtype": "$TEST_TRANSPORT", 00:21:22.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.519 "adrfam": "ipv4", 00:21:22.519 "trsvcid": "$NVMF_PORT", 00:21:22.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 
00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.520 { 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme$subsystem", 00:21:22.520 "trtype": "$TEST_TRANSPORT", 00:21:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "$NVMF_PORT", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.520 "hdgst": ${hdgst:-false}, 00:21:22.520 "ddgst": ${ddgst:-false} 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 } 00:21:22.520 EOF 00:21:22.520 )") 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:22.520 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme1", 00:21:22.520 "trtype": "tcp", 00:21:22.520 "traddr": "10.0.0.2", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "4420", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.520 "hdgst": false, 00:21:22.520 "ddgst": false 00:21:22.520 }, 00:21:22.520 "method": "bdev_nvme_attach_controller" 00:21:22.520 },{ 00:21:22.520 "params": { 00:21:22.520 "name": "Nvme2", 00:21:22.520 "trtype": "tcp", 00:21:22.520 "traddr": "10.0.0.2", 00:21:22.520 "adrfam": "ipv4", 00:21:22.520 "trsvcid": "4420", 00:21:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.520 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:22.520 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme3", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme4", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme5", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme6", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme7", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme8", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 
00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme9", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 },{ 00:21:22.521 "params": { 00:21:22.521 "name": "Nvme10", 00:21:22.521 "trtype": "tcp", 00:21:22.521 "traddr": "10.0.0.2", 00:21:22.521 "adrfam": "ipv4", 00:21:22.521 "trsvcid": "4420", 00:21:22.521 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:22.521 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:22.521 "hdgst": false, 00:21:22.521 "ddgst": false 00:21:22.521 }, 00:21:22.521 "method": "bdev_nvme_attach_controller" 00:21:22.521 }' 00:21:22.521 [2024-07-25 09:35:55.193988] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:22.521 [2024-07-25 09:35:55.194065] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:22.521 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.779 [2024-07-25 09:35:55.257800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.779 [2024-07-25 09:35:55.368210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 564587 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:24.680 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:25.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 564587 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 564411 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.621 EOF 00:21:25.621 )") 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.621 09:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.621 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.621 { 00:21:25.621 "params": { 00:21:25.621 "name": "Nvme$subsystem", 00:21:25.621 "trtype": "$TEST_TRANSPORT", 00:21:25.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.621 "adrfam": "ipv4", 00:21:25.621 "trsvcid": "$NVMF_PORT", 00:21:25.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.621 "hdgst": ${hdgst:-false}, 00:21:25.621 "ddgst": ${ddgst:-false} 00:21:25.621 }, 00:21:25.621 "method": "bdev_nvme_attach_controller" 00:21:25.621 } 00:21:25.622 EOF 00:21:25.622 )") 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.622 { 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme$subsystem", 00:21:25.622 "trtype": "$TEST_TRANSPORT", 00:21:25.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "$NVMF_PORT", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.622 "hdgst": ${hdgst:-false}, 00:21:25.622 "ddgst": ${ddgst:-false} 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 } 00:21:25.622 EOF 00:21:25.622 )") 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.622 { 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme$subsystem", 00:21:25.622 "trtype": "$TEST_TRANSPORT", 00:21:25.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "$NVMF_PORT", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.622 "hdgst": ${hdgst:-false}, 00:21:25.622 "ddgst": ${ddgst:-false} 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 } 00:21:25.622 EOF 00:21:25.622 )") 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
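The configuration that gen_nvmf_target_json feeds to bdev_svc and bdevperf, as it expands in the printf just below, is one bdev_nvme_attach_controller stanza per requested subsystem (here 1..10), joined with commas and pretty-printed through jq. A simplified stand-in for that helper is sketched here; the real function lives in the test's nvmf/common.sh and uses the same TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT variables, and the outer "subsystems"/"bdev" wrapper shown is the standard SPDK JSON-config skeleton rather than something visible in this trace:

    # Build one attach-controller entry per requested subsystem and emit a single
    # JSON document on stdout (simplified sketch of gen_nvmf_target_json).
    gen_json() {
        local i entries=()
        for i in "$@"; do
            entries+=("{
              \"params\": {
                \"name\": \"Nvme$i\",
                \"trtype\": \"$TEST_TRANSPORT\",
                \"traddr\": \"$NVMF_FIRST_TARGET_IP\",
                \"adrfam\": \"ipv4\",
                \"trsvcid\": \"$NVMF_PORT\",
                \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\",
                \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\",
                \"hdgst\": false,
                \"ddgst\": false
              },
              \"method\": \"bdev_nvme_attach_controller\"
            }")
        done
        local IFS=,
        printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}" | jq .
    }
    # Usage mirroring the trace: gen_json 1 2 3 4 5 6 7 8 9 10
    # (the test hands the result to bdevperf via --json /dev/fd/62 rather than a temp file).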
00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:25.622 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme1", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme2", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme3", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme4", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme5", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme6", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme7", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme8", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme9", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 },{ 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme10", 00:21:25.622 "trtype": "tcp", 00:21:25.622 "traddr": "10.0.0.2", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "4420", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:25.622 "hdgst": false, 00:21:25.622 "ddgst": false 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 }' 00:21:25.622 [2024-07-25 09:35:58.228582] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:25.622 [2024-07-25 09:35:58.228683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565010 ] 00:21:25.622 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.622 [2024-07-25 09:35:58.293870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.882 [2024-07-25 09:35:58.408848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.259 Running I/O for 1 seconds... 00:21:28.634 00:21:28.634 Latency(us) 00:21:28.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme1n1 : 1.10 232.41 14.53 0.00 0.00 272541.01 19126.80 257872.02 00:21:28.634 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme2n1 : 1.11 239.22 14.95 0.00 0.00 257624.63 9709.04 234570.33 00:21:28.634 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme3n1 : 1.09 239.27 14.95 0.00 0.00 253794.60 7864.32 242337.56 00:21:28.634 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme4n1 : 1.10 233.65 14.60 0.00 0.00 257507.56 18835.53 256318.58 00:21:28.634 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme5n1 : 1.14 223.80 13.99 0.00 0.00 264966.83 18835.53 265639.25 00:21:28.634 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme6n1 : 1.15 222.83 13.93 0.00 0.00 261598.25 20583.16 259425.47 00:21:28.634 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme7n1 : 1.13 225.95 14.12 0.00 0.00 253191.21 17961.72 257872.02 00:21:28.634 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 
Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme8n1 : 1.18 270.37 16.90 0.00 0.00 208965.18 13398.47 254765.13 00:21:28.634 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme9n1 : 1.16 221.26 13.83 0.00 0.00 250347.14 21262.79 270299.59 00:21:28.634 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:28.634 Verification LBA range: start 0x0 length 0x400 00:21:28.634 Nvme10n1 : 1.20 267.42 16.71 0.00 0.00 204494.54 6553.60 293601.28 00:21:28.634 =================================================================================================================== 00:21:28.634 Total : 2376.17 148.51 0.00 0.00 246561.70 6553.60 293601.28 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.634 rmmod nvme_tcp 00:21:28.634 rmmod nvme_fabrics 00:21:28.634 rmmod nvme_keyring 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 564411 ']' 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 564411 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 564411 ']' 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 564411 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
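A quick consistency check on the bdevperf summary above: with -o 65536 every I/O is 64 KiB, so the MiB/s column is just IOPS x 65536 / 2^20. For the aggregate line, 2376.17 IOPS x 64 KiB comes out to about 148.51 MiB/s, matching the reported total.

    # Recompute the reported bandwidth from the IOPS column (64 KiB I/O size).
    awk 'BEGIN { printf "%.2f MiB/s\n", 2376.17 * 65536 / 1048576 }'   # -> 148.51 MiB/s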
00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 564411 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 564411' 00:21:28.634 killing process with pid 564411 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 564411 00:21:28.634 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 564411 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.199 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.735 00:21:31.735 real 0m11.856s 00:21:31.735 user 0m34.446s 00:21:31.735 sys 0m3.238s 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:31.735 ************************************ 00:21:31.735 END TEST nvmf_shutdown_tc1 00:21:31.735 ************************************ 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:31.735 ************************************ 00:21:31.735 START TEST nvmf_shutdown_tc2 00:21:31.735 ************************************ 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:31.735 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:31.736 09:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:31.736 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:31.736 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:31.736 Found net devices under 0000:82:00.0: cvl_0_0 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.736 09:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:31.736 Found net devices under 0000:82:00.1: cvl_0_1 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.736 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.737 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.737 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.737 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.737 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.737 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.737 09:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:21:31.737 00:21:31.737 --- 10.0.0.2 ping statistics --- 00:21:31.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.737 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:21:31.737 00:21:31.737 --- 10.0.0.1 ping statistics --- 00:21:31.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.737 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=565894 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 565894 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 565894 ']' 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.737 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.737 [2024-07-25 09:36:04.194243] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:31.737 [2024-07-25 09:36:04.194316] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.737 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.737 [2024-07-25 09:36:04.260650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.737 [2024-07-25 09:36:04.369464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.737 [2024-07-25 09:36:04.369519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.737 [2024-07-25 09:36:04.369549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.737 [2024-07-25 09:36:04.369561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.737 [2024-07-25 09:36:04.369571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
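For reference, the nvmf_tcp_init sequence traced above boils down to the following standalone sketch. Interface, namespace and address names are the ones used in this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/10.0.0.2) and the nvmf_tgt path is this job's workspace path: the port acting as the target is moved into a private network namespace and addressed 10.0.0.2/24, the initiator port stays in the root namespace as 10.0.0.1/24, the firewall is opened for NVMe/TCP port 4420, reachability is checked both ways, and only then is nvmf_tgt started inside the namespace.

# Rough standalone equivalent of the nvmf_tcp_init steps above (names taken from this run).
TARGET_IF=cvl_0_0                 # becomes the NVMe/TCP target port
INITIATOR_IF=cvl_0_1              # stays in the root namespace as the initiator port
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                     # isolate the target port
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> root namespace

# The target application then runs inside the namespace, with the same flags as above:
ip netns exec "$NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &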
00:21:31.737 [2024-07-25 09:36:04.369631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.737 [2024-07-25 09:36:04.369680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.737 [2024-07-25 09:36:04.369731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:31.737 [2024-07-25 09:36:04.369733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.997 [2024-07-25 09:36:04.513842] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.997 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.997 Malloc1 00:21:31.997 [2024-07-25 09:36:04.589646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.997 Malloc2 00:21:31.997 Malloc3 00:21:31.997 Malloc4 00:21:32.255 Malloc5 00:21:32.255 Malloc6 00:21:32.255 Malloc7 00:21:32.255 Malloc8 00:21:32.255 Malloc9 00:21:32.514 Malloc10 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=566073 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 566073 /var/tmp/bdevperf.sock 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 566073 ']' 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.514 09:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:32.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.514 { 00:21:32.514 "params": { 00:21:32.514 "name": "Nvme$subsystem", 00:21:32.514 "trtype": "$TEST_TRANSPORT", 00:21:32.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.514 "adrfam": "ipv4", 00:21:32.514 "trsvcid": "$NVMF_PORT", 00:21:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.514 "hdgst": ${hdgst:-false}, 00:21:32.514 "ddgst": ${ddgst:-false} 00:21:32.514 }, 00:21:32.514 "method": "bdev_nvme_attach_controller" 00:21:32.514 } 00:21:32.514 EOF 00:21:32.514 )") 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.514 { 00:21:32.514 "params": { 00:21:32.514 "name": "Nvme$subsystem", 00:21:32.514 "trtype": "$TEST_TRANSPORT", 00:21:32.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.514 "adrfam": "ipv4", 00:21:32.514 "trsvcid": "$NVMF_PORT", 00:21:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.514 "hdgst": ${hdgst:-false}, 00:21:32.514 "ddgst": ${ddgst:-false} 00:21:32.514 }, 00:21:32.514 "method": "bdev_nvme_attach_controller" 00:21:32.514 } 00:21:32.514 EOF 00:21:32.514 )") 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.514 { 00:21:32.514 "params": { 00:21:32.514 
"name": "Nvme$subsystem", 00:21:32.514 "trtype": "$TEST_TRANSPORT", 00:21:32.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.514 "adrfam": "ipv4", 00:21:32.514 "trsvcid": "$NVMF_PORT", 00:21:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.514 "hdgst": ${hdgst:-false}, 00:21:32.514 "ddgst": ${ddgst:-false} 00:21:32.514 }, 00:21:32.514 "method": "bdev_nvme_attach_controller" 00:21:32.514 } 00:21:32.514 EOF 00:21:32.514 )") 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.514 { 00:21:32.514 "params": { 00:21:32.514 "name": "Nvme$subsystem", 00:21:32.514 "trtype": "$TEST_TRANSPORT", 00:21:32.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.514 "adrfam": "ipv4", 00:21:32.514 "trsvcid": "$NVMF_PORT", 00:21:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.514 "hdgst": ${hdgst:-false}, 00:21:32.514 "ddgst": ${ddgst:-false} 00:21:32.514 }, 00:21:32.514 "method": "bdev_nvme_attach_controller" 00:21:32.514 } 00:21:32.514 EOF 00:21:32.514 )") 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.514 { 00:21:32.514 "params": { 00:21:32.514 "name": "Nvme$subsystem", 00:21:32.514 "trtype": "$TEST_TRANSPORT", 00:21:32.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.514 "adrfam": "ipv4", 00:21:32.514 "trsvcid": "$NVMF_PORT", 00:21:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.514 "hdgst": ${hdgst:-false}, 00:21:32.514 "ddgst": ${ddgst:-false} 00:21:32.514 }, 00:21:32.514 "method": "bdev_nvme_attach_controller" 00:21:32.514 } 00:21:32.514 EOF 00:21:32.514 )") 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.514 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.514 { 00:21:32.514 "params": { 00:21:32.514 "name": "Nvme$subsystem", 00:21:32.515 "trtype": "$TEST_TRANSPORT", 00:21:32.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "$NVMF_PORT", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.515 "hdgst": ${hdgst:-false}, 00:21:32.515 "ddgst": ${ddgst:-false} 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 } 00:21:32.515 EOF 00:21:32.515 )") 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.515 { 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme$subsystem", 00:21:32.515 "trtype": "$TEST_TRANSPORT", 00:21:32.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "$NVMF_PORT", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.515 "hdgst": ${hdgst:-false}, 00:21:32.515 "ddgst": ${ddgst:-false} 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 } 00:21:32.515 EOF 00:21:32.515 )") 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.515 { 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme$subsystem", 00:21:32.515 "trtype": "$TEST_TRANSPORT", 00:21:32.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "$NVMF_PORT", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.515 "hdgst": ${hdgst:-false}, 00:21:32.515 "ddgst": ${ddgst:-false} 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 } 00:21:32.515 EOF 00:21:32.515 )") 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.515 { 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme$subsystem", 00:21:32.515 "trtype": "$TEST_TRANSPORT", 00:21:32.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "$NVMF_PORT", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.515 "hdgst": ${hdgst:-false}, 00:21:32.515 "ddgst": ${ddgst:-false} 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 } 00:21:32.515 EOF 00:21:32.515 )") 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.515 { 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme$subsystem", 00:21:32.515 "trtype": "$TEST_TRANSPORT", 00:21:32.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "$NVMF_PORT", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.515 "hdgst": ${hdgst:-false}, 00:21:32.515 "ddgst": ${ddgst:-false} 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 } 00:21:32.515 EOF 00:21:32.515 )") 00:21:32.515 09:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:32.515 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme1", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme2", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme3", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme4", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme5", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme6", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme7", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme8", 00:21:32.515 "trtype": "tcp", 
00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme9", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 },{ 00:21:32.515 "params": { 00:21:32.515 "name": "Nvme10", 00:21:32.515 "trtype": "tcp", 00:21:32.515 "traddr": "10.0.0.2", 00:21:32.515 "adrfam": "ipv4", 00:21:32.515 "trsvcid": "4420", 00:21:32.515 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:32.515 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:32.515 "hdgst": false, 00:21:32.515 "ddgst": false 00:21:32.515 }, 00:21:32.515 "method": "bdev_nvme_attach_controller" 00:21:32.515 }' 00:21:32.515 [2024-07-25 09:36:05.114925] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:32.516 [2024-07-25 09:36:05.115001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566073 ] 00:21:32.516 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.516 [2024-07-25 09:36:05.178922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.775 [2024-07-25 09:36:05.289302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.149 Running I/O for 10 seconds... 
00:21:34.407 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.407 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:34.407 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:34.407 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.407 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:34.664 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.923 09:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 566073 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 566073 ']' 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 566073 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 566073 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 566073' 00:21:34.923 killing process with pid 566073 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 566073 00:21:34.923 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 566073 00:21:34.923 Received shutdown signal, test time was about 0.746660 seconds 00:21:34.923 00:21:34.923 Latency(us) 00:21:34.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.923 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme1n1 : 0.73 264.27 16.52 0.00 0.00 238509.57 33204.91 242337.56 00:21:34.923 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme2n1 : 0.70 183.98 11.50 0.00 0.00 333883.54 24563.86 257872.02 00:21:34.923 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme3n1 : 0.74 266.64 16.67 0.00 0.00 223505.62 4805.97 259425.47 00:21:34.923 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme4n1 : 0.73 262.46 16.40 0.00 0.00 222441.62 30486.38 245444.46 00:21:34.923 Job: Nvme5n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme5n1 : 0.75 257.42 16.09 0.00 0.00 221312.70 22039.51 257872.02 00:21:34.923 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme6n1 : 0.74 260.15 16.26 0.00 0.00 212771.52 18738.44 237677.23 00:21:34.923 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.923 Verification LBA range: start 0x0 length 0x400 00:21:34.923 Nvme7n1 : 0.74 258.55 16.16 0.00 0.00 208542.28 22816.24 253211.69 00:21:34.923 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.924 Verification LBA range: start 0x0 length 0x400 00:21:34.924 Nvme8n1 : 0.70 182.42 11.40 0.00 0.00 283570.25 18252.99 242337.56 00:21:34.924 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.924 Verification LBA range: start 0x0 length 0x400 00:21:34.924 Nvme9n1 : 0.71 179.62 11.23 0.00 0.00 280101.17 26408.58 268746.15 00:21:34.924 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.924 Verification LBA range: start 0x0 length 0x400 00:21:34.924 Nvme10n1 : 0.72 178.59 11.16 0.00 0.00 273328.73 19515.16 288940.94 00:21:34.924 =================================================================================================================== 00:21:34.924 Total : 2294.12 143.38 0.00 0.00 243145.76 4805.97 288940.94 00:21:35.181 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 565894 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.555 rmmod nvme_tcp 00:21:36.555 rmmod nvme_fabrics 00:21:36.555 rmmod nvme_keyring 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.555 09:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 565894 ']' 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 565894 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 565894 ']' 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 565894 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 565894 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 565894' 00:21:36.555 killing process with pid 565894 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 565894 00:21:36.555 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 565894 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.813 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.346 00:21:39.346 real 0m7.567s 00:21:39.346 user 0m22.748s 00:21:39.346 sys 0m1.350s 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:39.346 ************************************ 00:21:39.346 END TEST 
nvmf_shutdown_tc2 00:21:39.346 ************************************ 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:39.346 ************************************ 00:21:39.346 START TEST nvmf_shutdown_tc3 00:21:39.346 ************************************ 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.346 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
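The e810/x722/mlx arrays re-declared above are keyed by PCI vendor:device IDs (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus the Mellanox ConnectX IDs). With SPDK_TEST_NVMF_NICS=e810 only the first group is kept, which is why the loop that follows again reports the two 0x159b functions. A quick, hedged way to reproduce that device selection outside the harness is a plain lspci query by numeric ID:

# Sketch: list the PCI functions this job would treat as "e810" NICs.
# 8086 is the Intel vendor ID; 1592/159b are the E810 device IDs from the table above.
for dev_id in 1592 159b; do
    lspci -Dnn -d "8086:${dev_id}"
done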
00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:39.347 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:39.347 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:39.347 Found net devices under 0000:82:00.0: cvl_0_0 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
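The "Found net devices under ..." lines come from globbing each selected PCI function's net/ directory in sysfs, as the pci_net_devs assignment above shows, and keeping interfaces whose link state is up. A minimal sketch of that lookup, using the first port from this run as the example address:

# Sketch: map a PCI function to its kernel network interface(s), as nvmf/common.sh does.
pci=0000:82:00.0
for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$path" ] || continue                 # no netdev bound to this function
    ifname=${path##*/}
    state=$(cat "$path/operstate")             # "up" for the ports this test keeps
    echo "Found net device under $pci: $ifname (operstate: $state)"
done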
00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:39.347 Found net devices under 0000:82:00.1: cvl_0_1 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:39.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:21:39.347 00:21:39.347 --- 10.0.0.2 ping statistics --- 00:21:39.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.347 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:39.347 00:21:39.347 --- 10.0.0.1 ping statistics --- 00:21:39.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.347 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:39.347 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=567485 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 567485 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 567485 ']' 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
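The nvmf_tcp_init steps above build the two-port test topology: cvl_0_0 is moved into a fresh network namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), the NVMe/TCP listener port 4420 is opened in iptables, and reachability is verified with ping in both directions. A condensed reconstruction of those commands as they appear in the trace (run as root; error handling and cleanup omitted):

# Reconstructed from the trace above; interface names and addresses come from the log.
TGT=cvl_0_0            # target-side port, moved into the namespace
INI=cvl_0_1            # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1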
00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.348 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 [2024-07-25 09:36:11.834263] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:39.348 [2024-07-25 09:36:11.834367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.348 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.348 [2024-07-25 09:36:11.902982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.348 [2024-07-25 09:36:12.019890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.348 [2024-07-25 09:36:12.019954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.348 [2024-07-25 09:36:12.019971] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.348 [2024-07-25 09:36:12.019984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.348 [2024-07-25 09:36:12.019996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.348 [2024-07-25 09:36:12.020105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.348 [2024-07-25 09:36:12.020199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.348 [2024-07-25 09:36:12.020265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.348 [2024-07-25 09:36:12.020267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.284 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.285 [2024-07-25 09:36:12.794969] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.285 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.285 Malloc1 00:21:40.285 [2024-07-25 09:36:12.870146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.285 Malloc2 00:21:40.285 Malloc3 00:21:40.285 Malloc4 00:21:40.544 Malloc5 00:21:40.544 Malloc6 00:21:40.544 Malloc7 00:21:40.544 Malloc8 00:21:40.544 Malloc9 00:21:40.805 Malloc10 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=567675 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 567675 /var/tmp/bdevperf.sock 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 567675 ']' 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.805 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
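With the target up and the Malloc bdevs created, the test launches bdevperf and hands it a generated controller configuration over a process substitution (the /dev/fd/63 visible in the command line above). A sketch of that launch pattern, assuming test/nvmf/common.sh has been sourced so the gen_nvmf_target_json helper traced here is available; the flags and paths are the ones shown in the log:

# Sketch of the shutdown.sh launch pattern; not a drop-in replacement for the script.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from the log
"$rootdir/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!   # the test then waits for /var/tmp/bdevperf.sock before issuing RPCs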
00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 
"trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.806 { 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme$subsystem", 00:21:40.806 "trtype": "$TEST_TRANSPORT", 00:21:40.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "$NVMF_PORT", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.806 "hdgst": ${hdgst:-false}, 00:21:40.806 "ddgst": ${ddgst:-false} 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 } 00:21:40.806 EOF 00:21:40.806 )") 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
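The gen_nvmf_target_json trace above builds one bdev_nvme_attach_controller entry per subsystem with a here-document, joins the fragments with a comma IFS, and runs the result through jq. A simplified sketch of that pattern for two subsystems; wrapping the joined entries in a bare JSON array is an illustration-only shortcut, since the real helper embeds them in a fuller bdevperf configuration:

# Simplified sketch of the config+=("$(cat <<EOF ...)") / jq pattern seen in the trace.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420   # values from the log
config=()
for subsystem in 1 2; do
    config+=("$(
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Join the fragments with commas and pretty-print; the [ ... ] wrapper is for illustration only.
printf '[%s]\n' "$(IFS=,; printf '%s' "${config[*]}")" | jq .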
00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:40.806 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme1", 00:21:40.806 "trtype": "tcp", 00:21:40.806 "traddr": "10.0.0.2", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "4420", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.806 "hdgst": false, 00:21:40.806 "ddgst": false 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 },{ 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme2", 00:21:40.806 "trtype": "tcp", 00:21:40.806 "traddr": "10.0.0.2", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "4420", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:40.806 "hdgst": false, 00:21:40.806 "ddgst": false 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 },{ 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme3", 00:21:40.806 "trtype": "tcp", 00:21:40.806 "traddr": "10.0.0.2", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "4420", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:40.806 "hdgst": false, 00:21:40.806 "ddgst": false 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 },{ 00:21:40.806 "params": { 00:21:40.806 "name": "Nvme4", 00:21:40.806 "trtype": "tcp", 00:21:40.806 "traddr": "10.0.0.2", 00:21:40.806 "adrfam": "ipv4", 00:21:40.806 "trsvcid": "4420", 00:21:40.806 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:40.806 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:40.806 "hdgst": false, 00:21:40.806 "ddgst": false 00:21:40.806 }, 00:21:40.806 "method": "bdev_nvme_attach_controller" 00:21:40.806 },{ 00:21:40.806 "params": { 00:21:40.807 "name": "Nvme5", 00:21:40.807 "trtype": "tcp", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "adrfam": "ipv4", 00:21:40.807 "trsvcid": "4420", 00:21:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:40.807 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:40.807 "hdgst": false, 00:21:40.807 "ddgst": false 00:21:40.807 }, 00:21:40.807 "method": "bdev_nvme_attach_controller" 00:21:40.807 },{ 00:21:40.807 "params": { 00:21:40.807 "name": "Nvme6", 00:21:40.807 "trtype": "tcp", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "adrfam": "ipv4", 00:21:40.807 "trsvcid": "4420", 00:21:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:40.807 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:40.807 "hdgst": false, 00:21:40.807 "ddgst": false 00:21:40.807 }, 00:21:40.807 "method": "bdev_nvme_attach_controller" 00:21:40.807 },{ 00:21:40.807 "params": { 00:21:40.807 "name": "Nvme7", 00:21:40.807 "trtype": "tcp", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "adrfam": "ipv4", 00:21:40.807 "trsvcid": "4420", 00:21:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:40.807 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:40.807 "hdgst": false, 00:21:40.807 "ddgst": false 00:21:40.807 }, 00:21:40.807 "method": "bdev_nvme_attach_controller" 00:21:40.807 },{ 00:21:40.807 "params": { 00:21:40.807 "name": "Nvme8", 00:21:40.807 "trtype": "tcp", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "adrfam": "ipv4", 00:21:40.807 "trsvcid": "4420", 00:21:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:40.807 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:40.807 "hdgst": false, 00:21:40.807 "ddgst": false 00:21:40.807 }, 00:21:40.807 "method": "bdev_nvme_attach_controller" 00:21:40.807 },{ 00:21:40.807 "params": { 00:21:40.807 "name": "Nvme9", 00:21:40.807 "trtype": "tcp", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "adrfam": "ipv4", 00:21:40.807 "trsvcid": "4420", 00:21:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:40.807 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:40.807 "hdgst": false, 00:21:40.807 "ddgst": false 00:21:40.807 }, 00:21:40.807 "method": "bdev_nvme_attach_controller" 00:21:40.807 },{ 00:21:40.807 "params": { 00:21:40.807 "name": "Nvme10", 00:21:40.807 "trtype": "tcp", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "adrfam": "ipv4", 00:21:40.807 "trsvcid": "4420", 00:21:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:40.807 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:40.807 "hdgst": false, 00:21:40.807 "ddgst": false 00:21:40.807 }, 00:21:40.807 "method": "bdev_nvme_attach_controller" 00:21:40.807 }' 00:21:40.807 [2024-07-25 09:36:13.374042] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:40.807 [2024-07-25 09:36:13.374117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567675 ] 00:21:40.807 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.807 [2024-07-25 09:36:13.436758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.066 [2024-07-25 09:36:13.547775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.968 Running I/O for 10 seconds... 00:21:42.968 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.968 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:42.968 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:42.968 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.968 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:43.228 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:43.487 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.761 09:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:21:43.761 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 567485 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 567485 ']' 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 567485 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 567485 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 567485' 00:21:43.762 killing process with pid 567485 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 567485 00:21:43.762 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 567485 00:21:43.762 [2024-07-25 09:36:16.380996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50920 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.381102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50920 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.381135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50920 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382468] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the 
state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.382997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.383107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e53420 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.762 [2024-07-25 09:36:16.384529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 
09:36:16.384542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same 
with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.384996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385068] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.385145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50de0 is same with the state(5) to be set 00:21:43.763 [2024-07-25 09:36:16.387460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [2024-07-25 09:36:16.387542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [2024-07-25 09:36:16.387575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [2024-07-25 09:36:16.387608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [2024-07-25 09:36:16.387637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [2024-07-25 09:36:16.387672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [2024-07-25 09:36:16.387701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.763 [2024-07-25 09:36:16.387715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:43.763 [... nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE commands sqid:1 cid:11 through cid:41 nsid:1 (lba:25984 through lba:29824, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by a nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion; between 09:36:16.387991 and 09:36:16.388555 these entries are interleaved with repeated tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* messages reporting that the recv state of tqpair=0x1e51780 is same with the state(5) to be set ...] 00:21:43.765 [2024-07-25 09:36:16.388704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765
[2024-07-25 09:36:16.388717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 09:36:16.388978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [2024-07-25 09:36:16.388992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.765 [2024-07-25 
09:36:16.389006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.765 [... nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE commands sqid:1 cid:53 through cid:59 (lba:31360 through lba:32128), READ commands cid:0 through cid:3 (lba:24576 through lba:24960) and WRITE commands cid:60 through cid:63 (lba:32256 through lba:32640), each followed by a nvme_qpair.c: 474 ABORTED - SQ DELETION (00/08) completion; from 09:36:16.389277 these entries are interleaved with repeated tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* messages reporting that the recv state of tqpair=0x1e51c40 is same with the state(5) to be set ...] 00:21:43.765 [2024-07-25 09:36:16.389516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:43.765 [2024-07-25 09:36:16.389589] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b0de70 was disconnected and freed. reset controller. 00:21:43.766 [... the tcp.c:1653 *ERROR* message for tqpair=0x1e51c40 repeats with consecutive timestamps from 09:36:16.389605 through 09:36:16.389880 ...] 00:21:43.766 [2024-07-25 09:36:16.389892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.389992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51c40 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 
09:36:16.390228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c80 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a3b50 is same with the state(5) to be set 00:21:43.766 [2024-07-25 09:36:16.390517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.766 [2024-07-25 09:36:16.390557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.766 [2024-07-25 09:36:16.390571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aff00 is same with the state(5) to be set 00:21:43.767 [2024-07-25 09:36:16.390699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a23a0 is same with the state(5) to be set 00:21:43.767 [2024-07-25 09:36:16.390857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.390946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.390958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 
09:36:16.390971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e360 is same with the state(5) to be set 00:21:43.767 [2024-07-25 09:36:16.391014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.391034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.391061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.391093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.767 [2024-07-25 09:36:16.391121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f830 is same with the state(5) to be set 00:21:43.767 [2024-07-25 09:36:16.391196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.767 [2024-07-25 09:36:16.391221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.767 [2024-07-25 09:36:16.391255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.767 [2024-07-25 09:36:16.391284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.767 [2024-07-25 09:36:16.391301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.767 [2024-07-25 09:36:16.391313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.767 [2024-07-25 09:36:16.391326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.767 [2024-07-25 09:36:16.391329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.767 [... nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) completions and further nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE commands sqid:1 cid:10 through cid:27 nsid:1 (lba:25856 through lba:28032, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each likewise ABORTED - SQ DELETION (00/08); these entries are interleaved with repeated tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* messages reporting that the recv state of tqpair=0x1e52100 is same with the state(5) to be set, from 09:36:16.391340 through 09:36:16.391922 ...] 00:21:43.768 [2024-07-25 09:36:16.391935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.391935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.391949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.391951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.768 [2024-07-25 09:36:16.391963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.391967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.391979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.391982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.391992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.391997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with [2024-07-25 09:36:16.392055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1the state(5) to be set 00:21:43.768 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52100 is same with the state(5) to be set 00:21:43.768 [2024-07-25 09:36:16.392180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.768 [2024-07-25 09:36:16.392223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.768 [2024-07-25 09:36:16.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.392979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.392994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.393008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.393023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.393036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.393056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.393070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.393090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.393104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.393119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.769 [2024-07-25 09:36:16.393132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.769 [2024-07-25 09:36:16.393206] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8e090 was disconnected and freed. reset controller. 
00:21:43.769 [2024-07-25 09:36:16.393584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.769 [2024-07-25 09:36:16.393859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.393993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394181] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e525c0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.394911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:43.770 [2024-07-25 09:36:16.394947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e360 (9): Bad file descriptor 00:21:43.770 
[2024-07-25 09:36:16.395534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.770 [2024-07-25 09:36:16.395904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.395989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396087] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52aa0 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.771 [2024-07-25 09:36:16.396845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197f830 (9): Bad file descriptor 00:21:43.771 [2024-07-25 09:36:16.396964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.396988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 
[2024-07-25 09:36:16.397038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.771 [2024-07-25 09:36:16.397583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397595] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52f60 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.397834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.772 [2024-07-25 09:36:16.397865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e360 with addr=10.0.0.2, port=4420 00:21:43.772 [2024-07-25 09:36:16.397881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e360 is same with the state(5) to be set 00:21:43.772 [2024-07-25 09:36:16.398244] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.772 [2024-07-25 09:36:16.398332] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.772 [2024-07-25 09:36:16.398738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.398984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.398999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.772 [2024-07-25 09:36:16.399653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.772 [2024-07-25 09:36:16.399668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.399967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.399982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:43.773 [2024-07-25 09:36:16.399995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 
09:36:16.400280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.400612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.773 [2024-07-25 09:36:16.400630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.401143] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27cc1e0 was disconnected and freed. reset controller. 00:21:43.773 [2024-07-25 09:36:16.401334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.773 [2024-07-25 09:36:16.401376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197f830 with addr=10.0.0.2, port=4420 00:21:43.773 [2024-07-25 09:36:16.401394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f830 is same with the state(5) to be set 00:21:43.773 [2024-07-25 09:36:16.401413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e360 (9): Bad file descriptor 00:21:43.773 [2024-07-25 09:36:16.401468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.773 [2024-07-25 09:36:16.401495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.401510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.773 [2024-07-25 09:36:16.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.401537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.773 [2024-07-25 09:36:16.401551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.773 [2024-07-25 09:36:16.401564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.773 [2024-07-25 09:36:16.401577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.401589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bc170 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.401649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 
09:36:16.401692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.401720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.401748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.401773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b464a0 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.401800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c80 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.401827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a3b50 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.401855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aff00 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.401908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.401943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.401970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.401996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.402010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.402023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.402035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481610 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.402089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.402108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.402123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.402136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.402149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.402162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.402186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.774 [2024-07-25 09:36:16.402198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.402211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19af4a0 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.402238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a23a0 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.402390] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.774 [2024-07-25 09:36:16.402469] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.774 [2024-07-25 09:36:16.403814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:43.774 [2024-07-25 09:36:16.403856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197f830 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.403877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:43.774 [2024-07-25 09:36:16.403891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:43.774 [2024-07-25 09:36:16.403906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:43.774 [2024-07-25 09:36:16.404029] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.774 [2024-07-25 09:36:16.404194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.774 [2024-07-25 09:36:16.404306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.774 [2024-07-25 09:36:16.404332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c80 with addr=10.0.0.2, port=4420 00:21:43.774 [2024-07-25 09:36:16.404348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c80 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.404370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.774 [2024-07-25 09:36:16.404384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:43.774 [2024-07-25 09:36:16.404397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:43.774 [2024-07-25 09:36:16.404500] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.774 [2024-07-25 09:36:16.404851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.774 [2024-07-25 09:36:16.404878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c80 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.404999] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:43.774 [2024-07-25 09:36:16.405035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:43.774 [2024-07-25 09:36:16.405059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:43.774 [2024-07-25 09:36:16.405072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:43.774 [2024-07-25 09:36:16.405155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.774 [2024-07-25 09:36:16.406987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:43.774 [2024-07-25 09:36:16.407215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.774 [2024-07-25 09:36:16.407242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e360 with addr=10.0.0.2, port=4420 00:21:43.774 [2024-07-25 09:36:16.407258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e360 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.407315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e360 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.407385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:43.774 [2024-07-25 09:36:16.407412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:43.774 [2024-07-25 09:36:16.407425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:43.774 [2024-07-25 09:36:16.407481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:43.774 [2024-07-25 09:36:16.408044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.774 [2024-07-25 09:36:16.408326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.774 [2024-07-25 09:36:16.408352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197f830 with addr=10.0.0.2, port=4420 00:21:43.774 [2024-07-25 09:36:16.408378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f830 is same with the state(5) to be set 00:21:43.774 [2024-07-25 09:36:16.408435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197f830 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.408491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.774 [2024-07-25 09:36:16.408508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:43.774 [2024-07-25 09:36:16.408528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.774 [2024-07-25 09:36:16.408590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.774 [2024-07-25 09:36:16.411200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc170 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.411239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b464a0 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.411289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1481610 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.411322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19af4a0 (9): Bad file descriptor 00:21:43.774 [2024-07-25 09:36:16.411490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.774 [2024-07-25 09:36:16.411515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.411544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.774 [2024-07-25 09:36:16.411560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.411577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.774 [2024-07-25 09:36:16.411591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.411606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.774 [2024-07-25 09:36:16.411620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.774 [2024-07-25 09:36:16.411635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 
[2024-07-25 09:36:16.411948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.411978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.411993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 
09:36:16.412245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.775 [2024-07-25 09:36:16.412651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.775 [2024-07-25 09:36:16.412664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.412972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.412985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.413397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.413412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0c980 is same with the state(5) to be set 00:21:43.776 [2024-07-25 09:36:16.414677] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.414972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.414987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.415001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.415016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.415030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.415045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.415059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.776 [2024-07-25 09:36:16.415075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.776 [2024-07-25 09:36:16.415088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.415880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
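The "(00/08)" printed in every spdk_nvme_print_completion line above is the NVMe status code type / status code pair: SCT 0x0 (the generic command status set) and SC 0x08, "Command Aborted due to SQ Deletion", which is what the host reports for reads still queued on a queue pair whose submission queue is being torn down. A minimal, self-contained decoding sketch follows; it is illustrative C, not SPDK source or test output, and the helper name nvme_status_str is invented for the example.

    /*
     * Illustrative only, not SPDK code.  Decodes the "(sct/sc)" tuple shown
     * in the completion lines above; SCT 0x0 / SC 0x08 is "Command Aborted
     * due to SQ Deletion" in the NVMe base specification.
     */
    #include <stdio.h>

    static const char *nvme_status_str(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x0) {                 /* generic command status */
            if (sc == 0x00)
                return "SUCCESS";
            if (sc == 0x08)
                return "ABORTED - SQ DELETION";
            return "GENERIC (other status code)";
        }
        return "NON-GENERIC STATUS";
    }

    int main(void)
    {
        /* the pair printed throughout the dump above: (00/08) */
        printf("%s\n", nvme_status_str(0x0, 0x08));  /* ABORTED - SQ DELETION */
        return 0;
    }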
00:21:43.777 [2024-07-25 09:36:16.415909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.415929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f3f0 is same with the state(5) to be set 00:21:43.777 [2024-07-25 09:36:16.417061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417365] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.777 [2024-07-25 09:36:16.417426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.777 [2024-07-25 09:36:16.417439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.417979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.417997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-25 09:36:16.418630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.778 [2024-07-25 09:36:16.418644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.418863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.418878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.779 [2024-07-25 09:36:16.418891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.779 [2024-07-25 09:36:16.418907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.779 [2024-07-25 09:36:16.418920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.779 [2024-07-25 09:36:16.418935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.779 [2024-07-25 09:36:16.418948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.779 [2024-07-25 09:36:16.418964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.779 [2024-07-25 09:36:16.418977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.779 [2024-07-25 09:36:16.418992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.779 [2024-07-25 09:36:16.419006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.779 [2024-07-25 09:36:16.419020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979f20 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.420272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:43.779 [2024-07-25 09:36:16.420301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:43.779 [2024-07-25 09:36:16.420319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:43.779 [2024-07-25 09:36:16.420730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.779 [2024-07-25 09:36:16.420760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a3b50 with addr=10.0.0.2, port=4420
00:21:43.779 [2024-07-25 09:36:16.420777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a3b50 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.420902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.779 [2024-07-25 09:36:16.420930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a23a0 with addr=10.0.0.2, port=4420
00:21:43.779 [2024-07-25 09:36:16.420946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a23a0 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.421090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.779 [2024-07-25 09:36:16.421114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19aff00 with addr=10.0.0.2, port=4420
00:21:43.779 [2024-07-25 09:36:16.421129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aff00 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.422013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:43.779 [2024-07-25 09:36:16.422039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:43.779 [2024-07-25 09:36:16.422055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:43.779 [2024-07-25 09:36:16.422099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a3b50 (9): Bad file descriptor
00:21:43.779 [2024-07-25 09:36:16.422122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a23a0 (9): Bad file descriptor
00:21:43.779 [2024-07-25 09:36:16.422140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aff00 (9): Bad file descriptor
00:21:43.779 [2024-07-25 09:36:16.422443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.779 [2024-07-25 09:36:16.422470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c80 with addr=10.0.0.2, port=4420
00:21:43.779 [2024-07-25 09:36:16.422486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c80 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.422593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.779 [2024-07-25 09:36:16.422617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e360 with addr=10.0.0.2, port=4420
00:21:43.779 [2024-07-25 09:36:16.422632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e360 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.422713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.779 [2024-07-25 09:36:16.422737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197f830 with addr=10.0.0.2, port=4420
00:21:43.779 [2024-07-25 09:36:16.422752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f830 is same with the state(5) to be set
00:21:43.779 [2024-07-25 09:36:16.422767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:43.779 [2024-07-25 09:36:16.422779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:43.779 [2024-07-25 09:36:16.422795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:43.779 [2024-07-25 09:36:16.422815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:43.779 [2024-07-25 09:36:16.422828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:43.779 [2024-07-25 09:36:16.422841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
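The pattern above repeats once per controller being reset: connect() in posix_sock_create fails with errno = 111, nvme_tcp_qpair_connect_sock reports the socket error for the new tqpair, and the reconnect poll then marks the controller as failed. On Linux, errno 111 is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 is reachable but no longer accepting connections on that port while the host is trying to rebuild the TCP queue pairs. A minimal, self-contained sketch showing the same errno from an ordinary POSIX connect(); the address and port are copied from the log, and this is plain C for illustration, not SPDK code or test output.

    /*
     * Illustrative only, not SPDK code.  With no listener on 10.0.0.2:4420
     * (the address/port taken from the log above), connect() fails and
     * errno is 111, ECONNREFUSED on Linux.
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return 1;
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* with nothing listening this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }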
00:21:43.779 [2024-07-25 09:36:16.422860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:43.779 [2024-07-25 09:36:16.422874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:43.779 [2024-07-25 09:36:16.422886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:43.779 [2024-07-25 09:36:16.422991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-25 09:36:16.423205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.779 [2024-07-25 09:36:16.423221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.423982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.423995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.424011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.424024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.424039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.424052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.424067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.424081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.424096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.424109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.424124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.424137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-07-25 09:36:16.424156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-25 09:36:16.424169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:43.780 [2024-07-25 09:36:16.424185 - 09:36:16.424920] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 25 READ commands sqid:1 cid:39-63 nsid:1 (lba:21376-24448 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.781 [2024-07-25 09:36:16.424935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197b410 is same with the state(5) to be set
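The "(00/08)" pair in each completion above is the NVMe status code type / status code that spdk_nvme_print_completion renders as "ABORTED - SQ DELETION". A minimal decoding sketch, assuming the generic-command-status values from the NVMe base specification; the helper name and table below are illustrative and not part of SPDK or of this test run:

# Illustrative decoder for the "(SCT/SC)" pair shown in the completions above.
# Assumption: SCT 0x0 is the NVMe generic command status type, in which
# SC 0x08 is "Command Aborted due to SQ Deletion" (printed here as ABORTED - SQ DELETION).
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct, sc):
    # Return a readable name for an NVMe (status code type, status code) pair.
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "GENERIC STATUS 0x%02x" % sc)
    return "SCT 0x%x / SC 0x%02x" % (sct, sc)

print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION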
00:21:43.781 [2024-07-25 09:36:16.426197 - 09:36:16.428098] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 READ commands sqid:1 cid:0-63 nsid:1 (lba:16384-24448 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.783 [2024-07-25 09:36:16.428112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22acb80 is same with the state(5) to be set
00:21:43.783 [2024-07-25 09:36:16.429380 - 09:36:16.431131] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 59 READ commands sqid:1 cid:5-63 nsid:1 (lba:17024-24448 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.784 [2024-07-25 09:36:16.431148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24545f0 is same with the state(5) to be set
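The bursts above differ only in the cid and lba fields, so a run like this can be collapsed to one summary per submission queue when triaging; a small sketch, assuming the record format shown above (the regex and helper are illustrative, not part of SPDK or the test harness):

import re
from collections import defaultdict

# Field layout taken from the command records above; not an official SPDK format.
CMD_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(log_text):
    # Collapse repeated READ printouts into one summary line per submission queue.
    per_sq = defaultdict(lambda: {"cid": [], "lba": []})
    for m in CMD_RE.finditer(log_text):
        sqid, cid, lba, _length = (int(g) for g in m.groups())
        per_sq[sqid]["cid"].append(cid)
        per_sq[sqid]["lba"].append(lba)
    aborted = log_text.count("ABORTED - SQ DELETION")
    lines = []
    for sqid, f in sorted(per_sq.items()):
        lines.append("sqid:%d: %d READs (cid %d-%d, lba %d-%d), %d completions aborted by SQ deletion"
                     % (sqid, len(f["cid"]), min(f["cid"]), max(f["cid"]),
                        min(f["lba"]), max(f["lba"]), aborted))
    return lines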
00:21:43.784 [2024-07-25 09:36:16.432410 - 09:36:16.433934] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 51 READ commands sqid:1 cid:0-50 nsid:1 (lba:16384-22784 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.786 [2024-07-25 09:36:16.433949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.786 [2024-07-25 09:36:16.433962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.433978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 
09:36:16.434246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.786 [2024-07-25 09:36:16.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.786 [2024-07-25 09:36:16.434323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26245c0 is same with the state(5) to be set 00:21:43.786 [2024-07-25 09:36:16.435928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.786 [2024-07-25 09:36:16.435954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.786 [2024-07-25 09:36:16.435967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.786 [2024-07-25 09:36:16.435983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:43.786 [2024-07-25 09:36:16.436004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:43.786 [2024-07-25 09:36:16.436064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c80 (9): Bad file descriptor 00:21:43.786 [2024-07-25 09:36:16.436089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e360 (9): Bad file descriptor 00:21:43.786 [2024-07-25 09:36:16.436108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197f830 (9): Bad file descriptor 00:21:43.786 [2024-07-25 09:36:16.436183] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.786 [2024-07-25 09:36:16.436208] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.786 [2024-07-25 09:36:16.436231] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.786 [2024-07-25 09:36:16.436248] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.786 [2024-07-25 09:36:16.436267] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
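The dump above is SPDK printing every READ still outstanding on qid:1 when the submission queue was deleted: each nvme_io_qpair_print_command line (cid 21 through 63 in the portion shown, lba stepping by 128) is paired with a completion whose status is ABORTED - SQ DELETION, the generic status-code-type/status-code pair rendered as "(00/08)" by spdk_nvme_print_completion. Together with the "Resetting controller failed" and "Unable to perform failover, already in progress" notices, this is the expected fallout of nvmf_shutdown_tc3 killing the target while bdevperf still has I/O queued. A rough offline summary of such a dump can be produced with the sketch below; the build.log file name and the decode_status helper are illustrative only (not part of the SPDK tree), and only the status pairs seen in this run are named.

decode_status() {
  # Name only the (sct/sc) pairs that actually occur in this log.
  case "$1" in
    00/00) echo "SUCCESS" ;;
    00/08) echo "ABORTED - SQ DELETION" ;;
    *)     echo "sct/sc $1 (see the NVMe base specification status code tables)" ;;
  esac
}
grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' build.log | tr -d '()' | sort | uniq -c |
while read -r count pair; do
  printf '%6d x %s\n' "$count" "$(decode_status "$pair")"
done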
00:21:43.786 [2024-07-25 09:36:16.436353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:43.786 task offset: 25088 on job bdev=Nvme3n1 fails
00:21:43.786
00:21:43.786 Latency(us)
00:21:43.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:43.786 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme1n1 ended in about 0.86 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme1n1 : 0.86 222.84 13.93 74.28 0.00 212822.28 8252.68 246997.90
00:21:43.786 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme2n1 ended in about 0.88 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme2n1 : 0.88 145.45 9.09 72.72 0.00 283907.92 19320.98 253211.69
00:21:43.786 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme3n1 ended in about 0.86 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme3n1 : 0.86 223.26 13.95 74.42 0.00 203229.68 21262.79 234570.33
00:21:43.786 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme4n1 ended in about 0.88 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme4n1 : 0.88 171.12 10.69 46.46 0.00 270023.49 17670.45 268746.15
00:21:43.786 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme5n1 ended in about 0.89 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme5n1 : 0.89 144.53 9.03 72.26 0.00 267449.58 26214.40 259425.47
00:21:43.786 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme6n1 ended in about 0.89 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme6n1 : 0.89 143.57 8.97 71.79 0.00 263491.38 18835.53 264085.81
00:21:43.786 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme7n1 ended in about 0.89 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme7n1 : 0.89 143.06 8.94 71.53 0.00 258643.69 32039.82 246997.90
00:21:43.786 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme8n1 ended in about 0.90 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme8n1 : 0.90 148.15 9.26 65.72 0.00 252925.98 19029.71 223696.21
00:21:43.786 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme9n1 ended in about 0.90 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme9n1 : 0.90 142.08 8.88 71.04 0.00 249025.17 20388.98 273406.48
00:21:43.786 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.786 Job: Nvme10n1 ended in about 0.87 seconds with error
00:21:43.786 Verification LBA range: start 0x0 length 0x400
00:21:43.786 Nvme10n1 : 0.87 147.24 9.20 73.62 0.00 232463.61 5267.15 287387.50
00:21:43.786 ===================================================================================================================
00:21:43.786 Total : 1631.30 101.96 693.85 0.00 246812.51 5267.15 287387.50
00:21:43.786 [2024-07-25 09:36:16.461679] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:21:43.786 [2024-07-25 09:36:16.461755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:43.786 [2024-07-25 09:36:16.462088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.786 [2024-07-25 09:36:16.462123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19af4a0 with addr=10.0.0.2, port=4420 00:21:43.786 [2024-07-25 09:36:16.462142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19af4a0 is same with the state(5) to be set 00:21:43.786 [2024-07-25 09:36:16.462256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.786 [2024-07-25 09:36:16.462282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1481610 with addr=10.0.0.2, port=4420 00:21:43.786 [2024-07-25 09:36:16.462298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481610 is same with the state(5) to be set 00:21:43.786 [2024-07-25 09:36:16.462313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:43.786 [2024-07-25 09:36:16.462326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:43.786 [2024-07-25 09:36:16.462341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:43.786 [2024-07-25 09:36:16.462372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:43.786 [2024-07-25 09:36:16.462389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.462403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:43.787 [2024-07-25 09:36:16.462420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.462434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.462459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.787 [2024-07-25 09:36:16.463611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:43.787 [2024-07-25 09:36:16.463639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:43.787 [2024-07-25 09:36:16.463656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:43.787 [2024-07-25 09:36:16.463673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.463687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.463698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
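For reference when reading the Latency(us) table above: after the device name and runtime, the columns are IOPS, MiB/s, failed I/O per second (Fail/s), timed-out I/O per second (TO/s), and average/min/max completion latency in microseconds. Because every job runs 64 KiB I/O (IO size: 65536), throughput in MiB/s is simply IOPS/16: Nvme1n1 reports 222.84 IOPS and 222.84/16 = 13.93 MiB/s, and the Total row 1631.30/16 = 101.96 MiB/s. A quick consistency check over a saved copy of the table (the bdevperf_table.log name is illustrative) could be:

awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" {
       printf "%-9s reported %6.2f MiB/s, expected %6.2f (IOPS/16)\n", $2, $6, $5/16
     }' bdevperf_table.log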
00:21:43.787 [2024-07-25 09:36:16.463912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.463940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bc170 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.463956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bc170 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.464101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.464126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b464a0 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.464142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b464a0 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.464178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19af4a0 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.464199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1481610 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.464284] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.787 [2024-07-25 09:36:16.464320] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.787 [2024-07-25 09:36:16.464627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.464655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19aff00 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.464672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aff00 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.464791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.464816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a23a0 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.464831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a23a0 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.464995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.465020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a3b50 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.465035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a3b50 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.465054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc170 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.465072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b464a0 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.465087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.465100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:43.787 [2024-07-25 
09:36:16.465112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:43.787 [2024-07-25 09:36:16.465135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.465158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.465170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:43.787 [2024-07-25 09:36:16.465261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.787 [2024-07-25 09:36:16.465285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:43.787 [2024-07-25 09:36:16.465301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:43.787 [2024-07-25 09:36:16.465316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.465329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.465372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aff00 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.465395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a23a0 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.465413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a3b50 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.465428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.465440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.465453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:43.787 [2024-07-25 09:36:16.465470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.465483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.465496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:43.787 [2024-07-25 09:36:16.465531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.465548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:43.787 [2024-07-25 09:36:16.465712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.465737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197f830 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.465752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f830 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.465827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.465851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e360 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.465866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e360 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.465982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.787 [2024-07-25 09:36:16.466007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c80 with addr=10.0.0.2, port=4420 00:21:43.787 [2024-07-25 09:36:16.466022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c80 is same with the state(5) to be set 00:21:43.787 [2024-07-25 09:36:16.466036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.466048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.466065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:43.787 [2024-07-25 09:36:16.466083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.466097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.466110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:43.787 [2024-07-25 09:36:16.466125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.466137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.466150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:43.787 [2024-07-25 09:36:16.466193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.466210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.466221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:43.787 [2024-07-25 09:36:16.466237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197f830 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.466255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e360 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.466272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c80 (9): Bad file descriptor 00:21:43.787 [2024-07-25 09:36:16.466318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.466335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.466348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.787 [2024-07-25 09:36:16.466372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.466387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.466400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:43.787 [2024-07-25 09:36:16.466415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:43.787 [2024-07-25 09:36:16.466428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:43.787 [2024-07-25 09:36:16.466440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:43.787 [2024-07-25 09:36:16.466476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.466493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.787 [2024-07-25 09:36:16.466504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
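The tail of the cascade is bdev_nvme retrying each subsystem (cnode1 through cnode10) after the target process has already been killed: every connect() to 10.0.0.2 port 4420 fails with errno 111 (ECONNREFUSED on Linux), so each reconnect attempt ends in "controller reinitialization failed" and the controller is left in the failed state, while the "(9): Bad file descriptor" flush errors come from qpair sockets that were already torn down (errno 9 is EBADF). A one-off probe from the initiator side, purely illustrative and not part of the test scripts, would confirm that nothing is listening any more:

if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "something still accepts TCP on 10.0.0.2:4420"
else
  echo "connection refused or timed out - consistent with the errno 111 failures above"
fi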
00:21:44.355 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:44.355 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 567675 00:21:45.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (567675) - No such process 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.290 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.290 rmmod nvme_tcp 00:21:45.290 rmmod nvme_fabrics 00:21:45.290 rmmod nvme_keyring 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.550 09:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.550 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.457 00:21:47.457 real 0m8.473s 00:21:47.457 user 0m22.659s 00:21:47.457 sys 0m1.523s 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 ************************************ 00:21:47.457 END TEST nvmf_shutdown_tc3 00:21:47.457 ************************************ 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:47.457 00:21:47.457 real 0m28.101s 00:21:47.457 user 1m19.947s 00:21:47.457 sys 0m6.237s 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 ************************************ 00:21:47.457 END TEST nvmf_shutdown 00:21:47.457 ************************************ 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:47.457 00:21:47.457 real 10m38.542s 00:21:47.457 user 25m36.204s 00:21:47.457 sys 2m32.685s 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.457 09:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 ************************************ 00:21:47.457 END TEST nvmf_target_extra 00:21:47.457 ************************************ 00:21:47.457 09:36:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:47.457 09:36:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.457 09:36:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.457 09:36:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 ************************************ 00:21:47.457 START TEST nvmf_host 00:21:47.457 ************************************ 00:21:47.457 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:47.716 * Looking for test storage... 
00:21:47.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.717 ************************************ 00:21:47.717 START TEST nvmf_multicontroller 00:21:47.717 ************************************ 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:47.717 * Looking for test storage... 
00:21:47.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:47.717 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.718 09:36:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.622 09:36:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:49.622 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:49.622 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:49.622 Found net devices under 0000:82:00.0: cvl_0_0 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:49.622 Found net devices under 0000:82:00.1: cvl_0_1 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.622 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:21:49.623 00:21:49.623 --- 10.0.0.2 ping statistics --- 00:21:49.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.623 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:21:49.623 00:21:49.623 --- 10.0.0.1 ping statistics --- 00:21:49.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.623 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=570228 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 570228 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 570228 ']' 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.623 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:49.881 [2024-07-25 09:36:22.370958] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:21:49.881 [2024-07-25 09:36:22.371038] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.881 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.881 [2024-07-25 09:36:22.436819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:49.881 [2024-07-25 09:36:22.552937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.881 [2024-07-25 09:36:22.552990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.881 [2024-07-25 09:36:22.553007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.881 [2024-07-25 09:36:22.553030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.881 [2024-07-25 09:36:22.553042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.881 [2024-07-25 09:36:22.553133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.881 [2024-07-25 09:36:22.553249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.881 [2024-07-25 09:36:22.553252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 [2024-07-25 09:36:22.693427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 Malloc0 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 
09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 [2024-07-25 09:36:22.756244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 [2024-07-25 09:36:22.764146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.140 Malloc1 00:21:50.140 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.141 09:36:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=570366 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 570366 /var/tmp/bdevperf.sock 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 570366 ']' 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.141 09:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.707 NVMe0n1 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.707 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.708 1 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.708 request: 00:21:50.708 { 00:21:50.708 "name": "NVMe0", 00:21:50.708 "trtype": "tcp", 00:21:50.708 "traddr": "10.0.0.2", 00:21:50.708 "adrfam": "ipv4", 00:21:50.708 
"trsvcid": "4420", 00:21:50.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.708 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:50.708 "hostaddr": "10.0.0.2", 00:21:50.708 "hostsvcid": "60000", 00:21:50.708 "prchk_reftag": false, 00:21:50.708 "prchk_guard": false, 00:21:50.708 "hdgst": false, 00:21:50.708 "ddgst": false, 00:21:50.708 "method": "bdev_nvme_attach_controller", 00:21:50.708 "req_id": 1 00:21:50.708 } 00:21:50.708 Got JSON-RPC error response 00:21:50.708 response: 00:21:50.708 { 00:21:50.708 "code": -114, 00:21:50.708 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:50.708 } 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.708 request: 00:21:50.708 { 00:21:50.708 "name": "NVMe0", 00:21:50.708 "trtype": "tcp", 00:21:50.708 "traddr": "10.0.0.2", 00:21:50.708 "adrfam": "ipv4", 00:21:50.708 "trsvcid": "4420", 00:21:50.708 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:50.708 "hostaddr": "10.0.0.2", 00:21:50.708 "hostsvcid": "60000", 00:21:50.708 "prchk_reftag": false, 00:21:50.708 "prchk_guard": false, 00:21:50.708 "hdgst": false, 00:21:50.708 "ddgst": false, 00:21:50.708 "method": "bdev_nvme_attach_controller", 00:21:50.708 "req_id": 1 00:21:50.708 } 00:21:50.708 Got JSON-RPC error response 00:21:50.708 response: 00:21:50.708 { 00:21:50.708 "code": -114, 00:21:50.708 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:50.708 } 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.708 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.708 request: 00:21:50.708 { 00:21:50.708 "name": "NVMe0", 00:21:50.708 "trtype": "tcp", 00:21:50.708 "traddr": "10.0.0.2", 00:21:50.708 "adrfam": "ipv4", 00:21:50.708 "trsvcid": "4420", 00:21:50.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.708 "hostaddr": "10.0.0.2", 00:21:50.708 "hostsvcid": "60000", 00:21:50.708 "prchk_reftag": false, 00:21:50.708 "prchk_guard": false, 00:21:50.708 "hdgst": false, 00:21:50.708 "ddgst": false, 00:21:50.708 "multipath": "disable", 00:21:50.708 "method": "bdev_nvme_attach_controller", 00:21:50.708 "req_id": 1 00:21:50.708 } 00:21:50.709 Got JSON-RPC error response 00:21:50.709 response: 00:21:50.709 { 00:21:50.709 "code": -114, 00:21:50.709 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:50.709 } 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.709 request: 00:21:50.709 { 00:21:50.709 "name": "NVMe0", 00:21:50.709 "trtype": "tcp", 00:21:50.709 "traddr": "10.0.0.2", 00:21:50.709 "adrfam": "ipv4", 00:21:50.709 "trsvcid": "4420", 00:21:50.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.709 "hostaddr": "10.0.0.2", 00:21:50.709 "hostsvcid": "60000", 00:21:50.709 "prchk_reftag": false, 00:21:50.709 "prchk_guard": false, 00:21:50.709 "hdgst": false, 00:21:50.709 "ddgst": false, 00:21:50.709 "multipath": "failover", 00:21:50.709 "method": "bdev_nvme_attach_controller", 00:21:50.709 "req_id": 1 00:21:50.709 } 00:21:50.709 Got JSON-RPC error response 00:21:50.709 response: 00:21:50.709 { 00:21:50.709 "code": -114, 00:21:50.709 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:50.709 } 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.709 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.968 00:21:50.968 09:36:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.968 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.968 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.968 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:50.968 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.968 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:50.969 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.969 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.227 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:51.227 09:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.605 0 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 570366 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 570366 ']' 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 570366 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 570366 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:52.605 
09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 570366' 00:21:52.605 killing process with pid 570366 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 570366 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 570366 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:21:52.605 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:52.605 [2024-07-25 09:36:22.869638] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:21:52.605 [2024-07-25 09:36:22.869734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570366 ] 00:21:52.605 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.605 [2024-07-25 09:36:22.930210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.605 [2024-07-25 09:36:23.038614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.605 [2024-07-25 09:36:23.857294] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 185f5628-a0f1-4c5c-bd3d-8ad532d6236b already exists 00:21:52.605 [2024-07-25 09:36:23.857333] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:185f5628-a0f1-4c5c-bd3d-8ad532d6236b alias for bdev NVMe1n1 00:21:52.605 [2024-07-25 09:36:23.857371] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:52.605 Running I/O for 1 seconds... 
00:21:52.605 00:21:52.605 Latency(us) 00:21:52.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.605 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:52.605 NVMe0n1 : 1.01 19306.26 75.42 0.00 0.00 6620.66 3932.16 13786.83 00:21:52.605 =================================================================================================================== 00:21:52.605 Total : 19306.26 75.42 0.00 0.00 6620.66 3932.16 13786.83 00:21:52.605 Received shutdown signal, test time was about 1.000000 seconds 00:21:52.605 00:21:52.605 Latency(us) 00:21:52.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.605 =================================================================================================================== 00:21:52.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.605 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.605 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.605 rmmod nvme_tcp 00:21:52.865 rmmod nvme_fabrics 00:21:52.865 rmmod nvme_keyring 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 570228 ']' 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 570228 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 570228 ']' 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 570228 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 570228 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 570228' 00:21:52.865 killing process with pid 570228 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 570228 00:21:52.865 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 570228 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.124 09:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.661 00:21:55.661 real 0m7.538s 00:21:55.661 user 0m12.577s 00:21:55.661 sys 0m2.203s 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.661 ************************************ 00:21:55.661 END TEST nvmf_multicontroller 00:21:55.661 ************************************ 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.661 ************************************ 00:21:55.661 START TEST nvmf_aer 00:21:55.661 ************************************ 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:55.661 * Looking for test storage... 
00:21:55.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:55.661 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.662 09:36:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:57.567 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:57.567 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:57.567 Found net devices under 0000:82:00.0: cvl_0_0 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.567 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.568 09:36:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:57.568 Found net devices under 0000:82:00.1: cvl_0_1 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:21:57.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:21:57.568 00:21:57.568 --- 10.0.0.2 ping statistics --- 00:21:57.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.568 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:57.568 00:21:57.568 --- 10.0.0.1 ping statistics --- 00:21:57.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.568 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=572582 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 572582 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 572582 ']' 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.568 09:36:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.568 [2024-07-25 09:36:30.014549] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
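Condensed from the nvmf_tcp_init trace above: the harness moves one E810 port (cvl_0_0, the target side) into a dedicated network namespace, keeps its peer (cvl_0_1, the initiator side) in the root namespace, opens the NVMe/TCP port in iptables, and ping-checks both directions before starting the target. A minimal sketch of those steps, using the interface names and addresses from this run (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator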
00:21:57.568 [2024-07-25 09:36:30.014654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.568 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.568 [2024-07-25 09:36:30.083395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.568 [2024-07-25 09:36:30.192999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.568 [2024-07-25 09:36:30.193051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.568 [2024-07-25 09:36:30.193078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.568 [2024-07-25 09:36:30.193090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.568 [2024-07-25 09:36:30.193099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.568 [2024-07-25 09:36:30.193180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.568 [2024-07-25 09:36:30.193243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.568 [2024-07-25 09:36:30.193309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.568 [2024-07-25 09:36:30.193312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 [2024-07-25 09:36:30.332536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 Malloc0 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 09:36:30 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 [2024-07-25 09:36:30.383724] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.826 [ 00:21:57.826 { 00:21:57.826 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:57.826 "subtype": "Discovery", 00:21:57.826 "listen_addresses": [], 00:21:57.826 "allow_any_host": true, 00:21:57.826 "hosts": [] 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.826 "subtype": "NVMe", 00:21:57.826 "listen_addresses": [ 00:21:57.826 { 00:21:57.826 "trtype": "TCP", 00:21:57.826 "adrfam": "IPv4", 00:21:57.826 "traddr": "10.0.0.2", 00:21:57.826 "trsvcid": "4420" 00:21:57.826 } 00:21:57.826 ], 00:21:57.826 "allow_any_host": true, 00:21:57.826 "hosts": [], 00:21:57.826 "serial_number": "SPDK00000000000001", 00:21:57.826 "model_number": "SPDK bdev Controller", 00:21:57.826 "max_namespaces": 2, 00:21:57.826 "min_cntlid": 1, 00:21:57.826 "max_cntlid": 65519, 00:21:57.826 "namespaces": [ 00:21:57.826 { 00:21:57.826 "nsid": 1, 00:21:57.826 "bdev_name": "Malloc0", 00:21:57.826 "name": "Malloc0", 00:21:57.826 "nguid": "6F31E4D11F6441D088AD7A0AE1DD48FE", 00:21:57.826 "uuid": "6f31e4d1-1f64-41d0-88ad-7a0ae1dd48fe" 00:21:57.826 } 00:21:57.826 ] 00:21:57.826 } 00:21:57.826 ] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=572615 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:21:57.826 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:21:57.826 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 Malloc1 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 [ 00:21:58.094 { 00:21:58.094 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:58.094 "subtype": "Discovery", 00:21:58.094 "listen_addresses": [], 00:21:58.094 "allow_any_host": true, 00:21:58.094 "hosts": [] 00:21:58.094 }, 00:21:58.094 { 00:21:58.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.094 "subtype": "NVMe", 00:21:58.094 "listen_addresses": [ 00:21:58.094 { 00:21:58.094 "trtype": "TCP", 00:21:58.094 "adrfam": "IPv4", 00:21:58.094 "traddr": "10.0.0.2", 00:21:58.094 "trsvcid": "4420" 00:21:58.094 } 00:21:58.094 ], 00:21:58.094 "allow_any_host": true, 00:21:58.094 "hosts": [], 00:21:58.094 "serial_number": "SPDK00000000000001", 00:21:58.094 "model_number": "SPDK bdev Controller", 00:21:58.094 "max_namespaces": 2, 00:21:58.094 "min_cntlid": 1, 00:21:58.094 "max_cntlid": 65519, 00:21:58.094 "namespaces": [ 00:21:58.094 { 00:21:58.094 "nsid": 1, 00:21:58.094 "bdev_name": "Malloc0", 00:21:58.094 "name": "Malloc0", 00:21:58.094 "nguid": "6F31E4D11F6441D088AD7A0AE1DD48FE", 00:21:58.094 "uuid": "6f31e4d1-1f64-41d0-88ad-7a0ae1dd48fe" 00:21:58.094 }, 00:21:58.094 { 00:21:58.094 "nsid": 2, 00:21:58.094 "bdev_name": "Malloc1", 00:21:58.094 "name": "Malloc1", 00:21:58.094 "nguid": 
"AD2F15F1F94441D8A06A90B3D98F5EB2", 00:21:58.094 "uuid": "ad2f15f1-f944-41d8-a06a-90b3d98f5eb2" 00:21:58.094 } 00:21:58.094 ] 00:21:58.094 } 00:21:58.094 ] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 572615 00:21:58.094 Asynchronous Event Request test 00:21:58.094 Attaching to 10.0.0.2 00:21:58.094 Attached to 10.0.0.2 00:21:58.094 Registering asynchronous event callbacks... 00:21:58.094 Starting namespace attribute notice tests for all controllers... 00:21:58.094 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:58.094 aer_cb - Changed Namespace 00:21:58.094 Cleaning up... 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.094 rmmod nvme_tcp 00:21:58.094 rmmod nvme_fabrics 00:21:58.094 rmmod nvme_keyring 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:58.094 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 572582 ']' 00:21:58.095 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 572582 00:21:58.095 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 572582 ']' 00:21:58.095 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 572582 00:21:58.095 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@953 -- # uname 00:21:58.095 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.095 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 572582 00:21:58.397 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:58.397 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:58.397 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 572582' 00:21:58.397 killing process with pid 572582 00:21:58.397 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 572582 00:21:58.397 09:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 572582 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.667 09:36:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.570 00:22:00.570 real 0m5.319s 00:22:00.570 user 0m4.176s 00:22:00.570 sys 0m1.852s 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.570 ************************************ 00:22:00.570 END TEST nvmf_aer 00:22:00.570 ************************************ 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.570 ************************************ 00:22:00.570 START TEST nvmf_async_init 00:22:00.570 ************************************ 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:00.570 * Looking for test storage... 
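For reference, the nvmf_aer pass that just completed reduces to the RPC calls and the host-side test binary visible in its trace. rpc_cmd here stands for the autotest wrapper that drives the RPC socket of the nvmf_tgt running inside cvl_0_0_ns_spdk, so the lines below are a condensed sketch of this run rather than a standalone script:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: the aer tool connects and registers AER callbacks; the harness then
  # polls for /tmp/aer_touch_file (waitforfile) before changing the namespaces
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # adding a second namespace is what triggers the "aer_cb - Changed Namespace" event logged above
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2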
00:22:00.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.570 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:00.571 09:36:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3d6bca65a4534ec3b01a44172f35fdd9 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.571 09:36:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.475 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.475 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.475 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:02.476 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:02.476 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:02.476 Found net devices under 0000:82:00.0: cvl_0_0 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.476 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:02.734 Found net devices under 0000:82:00.1: cvl_0_1 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.734 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:22:02.735 00:22:02.735 --- 10.0.0.2 ping statistics --- 00:22:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.735 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:02.735 00:22:02.735 --- 10.0.0.1 ping statistics --- 00:22:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.735 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=574670 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 574670 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 574670 ']' 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.735 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.735 [2024-07-25 09:36:35.405096] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
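The async_init target is launched the same way as the aer target, inside the target namespace; the notable difference is the core mask (-m 0x1, a single reactor, versus -m 0xF for the aer run). A condensed sketch of the launch as traced here, where waitforlisten is the autotest_common.sh helper that blocks until the app is listening on /var/tmp/spdk.sock:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # polls the RPC socket before any rpc_cmd is issued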
00:22:02.735 [2024-07-25 09:36:35.405165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.735 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.993 [2024-07-25 09:36:35.482555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.993 [2024-07-25 09:36:35.611410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.993 [2024-07-25 09:36:35.611474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.993 [2024-07-25 09:36:35.611515] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.993 [2024-07-25 09:36:35.611538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.993 [2024-07-25 09:36:35.611557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.993 [2024-07-25 09:36:35.611596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.252 [2024-07-25 09:36:35.757926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.252 null0 00:22:03.252 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:03.253 09:36:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3d6bca65a4534ec3b01a44172f35fdd9 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.253 [2024-07-25 09:36:35.798150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.253 09:36:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.513 nvme0n1 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.513 [ 00:22:03.513 { 00:22:03.513 "name": "nvme0n1", 00:22:03.513 "aliases": [ 00:22:03.513 "3d6bca65-a453-4ec3-b01a-44172f35fdd9" 00:22:03.513 ], 00:22:03.513 "product_name": "NVMe disk", 00:22:03.513 "block_size": 512, 00:22:03.513 "num_blocks": 2097152, 00:22:03.513 "uuid": "3d6bca65-a453-4ec3-b01a-44172f35fdd9", 00:22:03.513 "assigned_rate_limits": { 00:22:03.513 "rw_ios_per_sec": 0, 00:22:03.513 "rw_mbytes_per_sec": 0, 00:22:03.513 "r_mbytes_per_sec": 0, 00:22:03.513 "w_mbytes_per_sec": 0 00:22:03.513 }, 00:22:03.513 "claimed": false, 00:22:03.513 "zoned": false, 00:22:03.513 "supported_io_types": { 00:22:03.513 "read": true, 00:22:03.513 "write": true, 00:22:03.513 "unmap": false, 00:22:03.513 "flush": true, 00:22:03.513 "reset": true, 00:22:03.513 "nvme_admin": true, 00:22:03.513 "nvme_io": true, 00:22:03.513 "nvme_io_md": false, 00:22:03.513 "write_zeroes": true, 00:22:03.513 "zcopy": false, 00:22:03.513 "get_zone_info": false, 00:22:03.513 "zone_management": false, 00:22:03.513 "zone_append": false, 00:22:03.513 "compare": true, 00:22:03.513 "compare_and_write": true, 00:22:03.513 "abort": true, 00:22:03.513 "seek_hole": false, 00:22:03.513 "seek_data": false, 00:22:03.513 "copy": true, 00:22:03.513 "nvme_iov_md": 
false 00:22:03.513 }, 00:22:03.513 "memory_domains": [ 00:22:03.513 { 00:22:03.513 "dma_device_id": "system", 00:22:03.513 "dma_device_type": 1 00:22:03.513 } 00:22:03.513 ], 00:22:03.513 "driver_specific": { 00:22:03.513 "nvme": [ 00:22:03.513 { 00:22:03.513 "trid": { 00:22:03.513 "trtype": "TCP", 00:22:03.513 "adrfam": "IPv4", 00:22:03.513 "traddr": "10.0.0.2", 00:22:03.513 "trsvcid": "4420", 00:22:03.513 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:03.513 }, 00:22:03.513 "ctrlr_data": { 00:22:03.513 "cntlid": 1, 00:22:03.513 "vendor_id": "0x8086", 00:22:03.513 "model_number": "SPDK bdev Controller", 00:22:03.513 "serial_number": "00000000000000000000", 00:22:03.513 "firmware_revision": "24.09", 00:22:03.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.513 "oacs": { 00:22:03.513 "security": 0, 00:22:03.513 "format": 0, 00:22:03.513 "firmware": 0, 00:22:03.513 "ns_manage": 0 00:22:03.513 }, 00:22:03.513 "multi_ctrlr": true, 00:22:03.513 "ana_reporting": false 00:22:03.513 }, 00:22:03.513 "vs": { 00:22:03.513 "nvme_version": "1.3" 00:22:03.513 }, 00:22:03.513 "ns_data": { 00:22:03.513 "id": 1, 00:22:03.513 "can_share": true 00:22:03.513 } 00:22:03.513 } 00:22:03.513 ], 00:22:03.513 "mp_policy": "active_passive" 00:22:03.513 } 00:22:03.513 } 00:22:03.513 ] 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.513 [2024-07-25 09:36:36.046812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:03.513 [2024-07-25 09:36:36.046899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf861d0 (9): Bad file descriptor 00:22:03.513 [2024-07-25 09:36:36.179476] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.513 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.513 [ 00:22:03.513 { 00:22:03.513 "name": "nvme0n1", 00:22:03.513 "aliases": [ 00:22:03.513 "3d6bca65-a453-4ec3-b01a-44172f35fdd9" 00:22:03.513 ], 00:22:03.513 "product_name": "NVMe disk", 00:22:03.513 "block_size": 512, 00:22:03.513 "num_blocks": 2097152, 00:22:03.513 "uuid": "3d6bca65-a453-4ec3-b01a-44172f35fdd9", 00:22:03.513 "assigned_rate_limits": { 00:22:03.513 "rw_ios_per_sec": 0, 00:22:03.513 "rw_mbytes_per_sec": 0, 00:22:03.513 "r_mbytes_per_sec": 0, 00:22:03.513 "w_mbytes_per_sec": 0 00:22:03.513 }, 00:22:03.513 "claimed": false, 00:22:03.513 "zoned": false, 00:22:03.513 "supported_io_types": { 00:22:03.513 "read": true, 00:22:03.513 "write": true, 00:22:03.513 "unmap": false, 00:22:03.514 "flush": true, 00:22:03.514 "reset": true, 00:22:03.514 "nvme_admin": true, 00:22:03.514 "nvme_io": true, 00:22:03.514 "nvme_io_md": false, 00:22:03.514 "write_zeroes": true, 00:22:03.514 "zcopy": false, 00:22:03.514 "get_zone_info": false, 00:22:03.514 "zone_management": false, 00:22:03.514 "zone_append": false, 00:22:03.514 "compare": true, 00:22:03.514 "compare_and_write": true, 00:22:03.514 "abort": true, 00:22:03.514 "seek_hole": false, 00:22:03.514 "seek_data": false, 00:22:03.514 "copy": true, 00:22:03.514 "nvme_iov_md": false 00:22:03.514 }, 00:22:03.514 "memory_domains": [ 00:22:03.514 { 00:22:03.514 "dma_device_id": "system", 00:22:03.514 "dma_device_type": 1 00:22:03.514 } 00:22:03.514 ], 00:22:03.514 "driver_specific": { 00:22:03.514 "nvme": [ 00:22:03.514 { 00:22:03.514 "trid": { 00:22:03.514 "trtype": "TCP", 00:22:03.514 "adrfam": "IPv4", 00:22:03.514 "traddr": "10.0.0.2", 00:22:03.514 "trsvcid": "4420", 00:22:03.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:03.514 }, 00:22:03.514 "ctrlr_data": { 00:22:03.514 "cntlid": 2, 00:22:03.514 "vendor_id": "0x8086", 00:22:03.514 "model_number": "SPDK bdev Controller", 00:22:03.514 "serial_number": "00000000000000000000", 00:22:03.514 "firmware_revision": "24.09", 00:22:03.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.514 "oacs": { 00:22:03.514 "security": 0, 00:22:03.514 "format": 0, 00:22:03.514 "firmware": 0, 00:22:03.514 "ns_manage": 0 00:22:03.514 }, 00:22:03.514 "multi_ctrlr": true, 00:22:03.514 "ana_reporting": false 00:22:03.514 }, 00:22:03.514 "vs": { 00:22:03.514 "nvme_version": "1.3" 00:22:03.514 }, 00:22:03.514 "ns_data": { 00:22:03.514 "id": 1, 00:22:03.514 "can_share": true 00:22:03.514 } 00:22:03.514 } 00:22:03.514 ], 00:22:03.514 "mp_policy": "active_passive" 00:22:03.514 } 00:22:03.514 } 00:22:03.514 ] 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 09:36:36 
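The first async_init pass that ends here reduces to a short provision, attach, reset, detach cycle; everything below is taken from the trace (including the namespace GUID), with rpc_cmd again standing for the harness RPC wrapper. The cntlid reported by bdev_get_bdevs moves from 1 to 2 across the reset, consistent with the controller connection being torn down and re-established:

  rpc_cmd bdev_null_create null0 1024 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3d6bca65a4534ec3b01a44172f35fdd9
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side, via the bdev_nvme initiator driven over the same RPC socket
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_get_bdevs -b nvme0n1        # uuid matches the -g value above, cntlid 1
  rpc_cmd bdev_nvme_reset_controller nvme0
  rpc_cmd bdev_get_bdevs -b nvme0n1        # bdev survives the reset, cntlid now 2
  rpc_cmd bdev_nvme_detach_controller nvme0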
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vMG4dLVGVL 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vMG4dLVGVL 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 [2024-07-25 09:36:36.227394] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.514 [2024-07-25 09:36:36.227508] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vMG4dLVGVL 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 [2024-07-25 09:36:36.235434] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vMG4dLVGVL 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 [2024-07-25 09:36:36.243448] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.514 [2024-07-25 09:36:36.243502] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.774 nvme0n1 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.774 [ 00:22:03.774 { 00:22:03.774 "name": "nvme0n1", 00:22:03.774 "aliases": [ 00:22:03.774 "3d6bca65-a453-4ec3-b01a-44172f35fdd9" 00:22:03.774 ], 00:22:03.774 "product_name": "NVMe disk", 00:22:03.774 "block_size": 512, 00:22:03.774 "num_blocks": 2097152, 00:22:03.774 "uuid": "3d6bca65-a453-4ec3-b01a-44172f35fdd9", 00:22:03.774 "assigned_rate_limits": { 00:22:03.774 "rw_ios_per_sec": 0, 00:22:03.774 "rw_mbytes_per_sec": 0, 00:22:03.774 "r_mbytes_per_sec": 0, 00:22:03.774 "w_mbytes_per_sec": 0 00:22:03.774 }, 00:22:03.774 "claimed": false, 00:22:03.774 "zoned": false, 00:22:03.774 "supported_io_types": { 00:22:03.774 "read": true, 00:22:03.774 "write": true, 00:22:03.774 "unmap": false, 00:22:03.774 "flush": true, 00:22:03.774 "reset": true, 00:22:03.774 "nvme_admin": true, 00:22:03.774 "nvme_io": true, 00:22:03.774 "nvme_io_md": false, 00:22:03.774 "write_zeroes": true, 00:22:03.774 "zcopy": false, 00:22:03.774 "get_zone_info": false, 00:22:03.774 "zone_management": false, 00:22:03.774 "zone_append": false, 00:22:03.774 "compare": true, 00:22:03.774 "compare_and_write": true, 00:22:03.774 "abort": true, 00:22:03.774 "seek_hole": false, 00:22:03.774 "seek_data": false, 00:22:03.774 "copy": true, 00:22:03.774 "nvme_iov_md": false 00:22:03.774 }, 00:22:03.774 "memory_domains": [ 00:22:03.774 { 00:22:03.774 "dma_device_id": "system", 00:22:03.774 "dma_device_type": 1 00:22:03.774 } 00:22:03.774 ], 00:22:03.774 "driver_specific": { 00:22:03.774 "nvme": [ 00:22:03.774 { 00:22:03.774 "trid": { 00:22:03.774 "trtype": "TCP", 00:22:03.774 "adrfam": "IPv4", 00:22:03.774 "traddr": "10.0.0.2", 00:22:03.774 "trsvcid": "4421", 00:22:03.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:03.774 }, 00:22:03.774 "ctrlr_data": { 00:22:03.774 "cntlid": 3, 00:22:03.774 "vendor_id": "0x8086", 00:22:03.774 "model_number": "SPDK bdev Controller", 00:22:03.774 "serial_number": "00000000000000000000", 00:22:03.774 "firmware_revision": "24.09", 00:22:03.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.774 "oacs": { 00:22:03.774 "security": 0, 00:22:03.774 "format": 0, 00:22:03.774 "firmware": 0, 00:22:03.774 "ns_manage": 0 00:22:03.774 }, 00:22:03.774 "multi_ctrlr": true, 00:22:03.774 "ana_reporting": false 00:22:03.774 }, 00:22:03.774 "vs": { 00:22:03.774 "nvme_version": "1.3" 00:22:03.774 }, 00:22:03.774 "ns_data": { 00:22:03.774 "id": 1, 00:22:03.774 "can_share": true 00:22:03.774 } 00:22:03.774 } 00:22:03.774 ], 00:22:03.774 "mp_policy": "active_passive" 00:22:03.774 } 00:22:03.774 } 00:22:03.774 ] 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.vMG4dLVGVL 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:03.774 09:36:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.774 rmmod nvme_tcp 00:22:03.774 rmmod nvme_fabrics 00:22:03.774 rmmod nvme_keyring 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 574670 ']' 00:22:03.774 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 574670 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 574670 ']' 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 574670 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 574670 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 574670' 00:22:03.775 killing process with pid 574670 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 574670 00:22:03.775 [2024-07-25 09:36:36.427250] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.775 [2024-07-25 09:36:36.427287] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:03.775 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 574670 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.033 09:36:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.033 09:36:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.570 00:22:06.570 real 0m5.499s 00:22:06.570 user 0m2.173s 00:22:06.570 sys 0m1.802s 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.570 ************************************ 00:22:06.570 END TEST nvmf_async_init 00:22:06.570 ************************************ 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.570 ************************************ 00:22:06.570 START TEST dma 00:22:06.570 ************************************ 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:06.570 * Looking for test storage... 00:22:06.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.570 
09:36:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.570 09:36:38 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.570 09:36:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:06.571 00:22:06.571 real 0m0.065s 00:22:06.571 user 0m0.028s 00:22:06.571 sys 0m0.043s 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:06.571 ************************************ 00:22:06.571 END TEST dma 00:22:06.571 ************************************ 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.571 ************************************ 00:22:06.571 START TEST nvmf_identify 00:22:06.571 ************************************ 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:06.571 * Looking for test storage... 00:22:06.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.571 09:36:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.478 09:36:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:08.478 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.478 09:36:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.478 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:08.479 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:08.479 Found net devices under 0000:82:00.0: cvl_0_0 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:08.479 Found net devices under 0000:82:00.1: cvl_0_1 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:22:08.479 00:22:08.479 --- 10.0.0.2 ping statistics --- 00:22:08.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.479 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:22:08.479 00:22:08.479 --- 10.0.0.1 ping statistics --- 00:22:08.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.479 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=576789 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 576789 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 576789 ']' 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.479 09:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.479 [2024-07-25 09:36:40.962563] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
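Before the identify test can reach 10.0.0.2, nvmf_tcp_init in nvmf/common.sh moves one port of the e810 pair into a private network namespace and wires up addressing, as traced above. A condensed recap of that sequence, assuming the cvl_0_0 (target side) and cvl_0_1 (initiator side) interface names from this run:

    # flush stale addressing and create the target-side namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2 inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the default listener port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application is then launched inside the namespace
    # (the test uses the absolute workspace path to nvmf_tgt)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With that namespace in place, identify.sh creates the TCP transport, the Malloc0-backed subsystem nqn.2016-06.io.spdk:cnode1, and the 10.0.0.2:4420 listeners shown in the nvmf_get_subsystems dump that follows, before running spdk_nvme_identify against the discovery subsystem.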
00:22:08.479 [2024-07-25 09:36:40.962640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.479 [2024-07-25 09:36:41.030934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.479 [2024-07-25 09:36:41.148085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.479 [2024-07-25 09:36:41.148148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.479 [2024-07-25 09:36:41.148165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.479 [2024-07-25 09:36:41.148179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.479 [2024-07-25 09:36:41.148191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.479 [2024-07-25 09:36:41.148281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.479 [2024-07-25 09:36:41.148349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.479 [2024-07-25 09:36:41.148444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.479 [2024-07-25 09:36:41.148450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 [2024-07-25 09:36:41.942052] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 Malloc0 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 [2024-07-25 09:36:42.014050] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.414 [ 00:22:09.414 { 00:22:09.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:09.414 "subtype": "Discovery", 00:22:09.414 "listen_addresses": [ 00:22:09.414 { 00:22:09.414 "trtype": "TCP", 00:22:09.414 "adrfam": "IPv4", 00:22:09.414 "traddr": "10.0.0.2", 00:22:09.414 "trsvcid": "4420" 00:22:09.414 } 00:22:09.414 ], 00:22:09.414 "allow_any_host": true, 00:22:09.414 "hosts": [] 00:22:09.414 }, 00:22:09.414 { 00:22:09.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.414 "subtype": "NVMe", 00:22:09.414 "listen_addresses": [ 00:22:09.414 { 00:22:09.414 "trtype": "TCP", 00:22:09.414 "adrfam": "IPv4", 00:22:09.414 "traddr": "10.0.0.2", 00:22:09.414 "trsvcid": "4420" 00:22:09.414 } 00:22:09.414 ], 00:22:09.414 "allow_any_host": true, 00:22:09.414 "hosts": [], 00:22:09.414 "serial_number": "SPDK00000000000001", 00:22:09.414 "model_number": "SPDK bdev Controller", 00:22:09.414 "max_namespaces": 32, 00:22:09.414 "min_cntlid": 1, 00:22:09.414 "max_cntlid": 65519, 00:22:09.414 "namespaces": [ 00:22:09.414 { 00:22:09.414 "nsid": 1, 00:22:09.414 "bdev_name": "Malloc0", 00:22:09.414 "name": "Malloc0", 00:22:09.414 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:09.414 "eui64": "ABCDEF0123456789", 00:22:09.414 "uuid": "f607c6fa-c941-4b98-aa99-deebd407ef33" 00:22:09.414 } 00:22:09.414 ] 00:22:09.414 } 00:22:09.414 ] 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.414 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:09.414 [2024-07-25 09:36:42.052738] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:09.414 [2024-07-25 09:36:42.052780] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576944 ] 00:22:09.414 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.414 [2024-07-25 09:36:42.086832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:09.414 [2024-07-25 09:36:42.086893] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.414 [2024-07-25 09:36:42.086903] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.414 [2024-07-25 09:36:42.086917] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.414 [2024-07-25 09:36:42.086930] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.414 [2024-07-25 09:36:42.087267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:09.414 [2024-07-25 09:36:42.087324] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x98b540 0 00:22:09.414 [2024-07-25 09:36:42.101369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.414 [2024-07-25 09:36:42.101394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.414 [2024-07-25 09:36:42.101407] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.414 [2024-07-25 09:36:42.101414] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.414 [2024-07-25 09:36:42.101473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.414 [2024-07-25 09:36:42.101485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.414 [2024-07-25 09:36:42.101492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.414 [2024-07-25 09:36:42.101509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.415 [2024-07-25 09:36:42.101535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.109384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.109402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.109409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.109431] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.415 [2024-07-25 09:36:42.109452] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:09.415 [2024-07-25 09:36:42.109462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:22:09.415 [2024-07-25 09:36:42.109483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.109509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.109532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.109677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.109689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.109695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.109714] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:09.415 [2024-07-25 09:36:42.109728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:09.415 [2024-07-25 09:36:42.109740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.109763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.109783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.109887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.109900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.109906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.109921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:09.415 [2024-07-25 09:36:42.109939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.415 [2024-07-25 09:36:42.109951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.109964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.109974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.109994] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.110079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.110090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.110097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.110111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.415 [2024-07-25 09:36:42.110127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.110151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.110170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.110246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.110259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.110265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.110279] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:09.415 [2024-07-25 09:36:42.110288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:09.415 [2024-07-25 09:36:42.110300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.415 [2024-07-25 09:36:42.110410] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:09.415 [2024-07-25 09:36:42.110421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:09.415 [2024-07-25 09:36:42.110435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.110459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.110481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.110621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 
[2024-07-25 09:36:42.110635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.110642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.110661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.415 [2024-07-25 09:36:42.110693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.110718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.110739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.110819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.110832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.110839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.110853] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.415 [2024-07-25 09:36:42.110861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:09.415 [2024-07-25 09:36:42.110874] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:09.415 [2024-07-25 09:36:42.110888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.415 [2024-07-25 09:36:42.110903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.110925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.415 [2024-07-25 09:36:42.110936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.415 [2024-07-25 09:36:42.110957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.415 [2024-07-25 09:36:42.111074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.415 [2024-07-25 09:36:42.111093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.415 [2024-07-25 09:36:42.111099] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.111105] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98b540): datao=0, datal=4096, cccid=0 00:22:09.415 [2024-07-25 09:36:42.111112] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x9eb3c0) on tqpair(0x98b540): expected_datao=0, payload_size=4096 00:22:09.415 [2024-07-25 09:36:42.111119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.111129] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.111137] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.111149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.415 [2024-07-25 09:36:42.111158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.415 [2024-07-25 09:36:42.111164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.415 [2024-07-25 09:36:42.111170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.415 [2024-07-25 09:36:42.111181] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:09.415 [2024-07-25 09:36:42.111189] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:09.415 [2024-07-25 09:36:42.111200] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:09.416 [2024-07-25 09:36:42.111209] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:09.416 [2024-07-25 09:36:42.111216] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:09.416 [2024-07-25 09:36:42.111224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:09.416 [2024-07-25 09:36:42.111240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.416 [2024-07-25 09:36:42.111255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.416 [2024-07-25 09:36:42.111300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.416 [2024-07-25 09:36:42.111427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.416 [2024-07-25 09:36:42.111442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.416 [2024-07-25 09:36:42.111448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.416 [2024-07-25 09:36:42.111467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111490] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.416 [2024-07-25 09:36:42.111499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.416 [2024-07-25 09:36:42.111530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.416 [2024-07-25 09:36:42.111560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.416 [2024-07-25 09:36:42.111590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.416 [2024-07-25 09:36:42.111610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.416 [2024-07-25 09:36:42.111626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.416 [2024-07-25 09:36:42.111687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb3c0, cid 0, qid 0 00:22:09.416 [2024-07-25 09:36:42.111697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb540, cid 1, qid 0 00:22:09.416 [2024-07-25 09:36:42.111705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb6c0, cid 2, qid 0 00:22:09.416 [2024-07-25 09:36:42.111712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.416 [2024-07-25 09:36:42.111719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb9c0, cid 4, qid 0 00:22:09.416 [2024-07-25 09:36:42.111890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.416 [2024-07-25 09:36:42.111904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.416 [2024-07-25 09:36:42.111910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111916] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb9c0) on tqpair=0x98b540 00:22:09.416 [2024-07-25 09:36:42.111924] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:09.416 [2024-07-25 09:36:42.111933] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:09.416 [2024-07-25 09:36:42.111950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.111959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98b540) 00:22:09.416 [2024-07-25 09:36:42.111969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.416 [2024-07-25 09:36:42.111989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb9c0, cid 4, qid 0 00:22:09.416 [2024-07-25 09:36:42.112076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.416 [2024-07-25 09:36:42.112087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.416 [2024-07-25 09:36:42.112094] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.112100] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98b540): datao=0, datal=4096, cccid=4 00:22:09.416 [2024-07-25 09:36:42.112107] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9eb9c0) on tqpair(0x98b540): expected_datao=0, payload_size=4096 00:22:09.416 [2024-07-25 09:36:42.112114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.112129] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.416 [2024-07-25 09:36:42.112137] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.679 [2024-07-25 09:36:42.152537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.679 [2024-07-25 09:36:42.152544] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb9c0) on tqpair=0x98b540 00:22:09.679 [2024-07-25 09:36:42.152571] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:09.679 [2024-07-25 09:36:42.152608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98b540) 00:22:09.679 [2024-07-25 09:36:42.152631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.679 [2024-07-25 09:36:42.152643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98b540) 00:22:09.679 [2024-07-25 09:36:42.152671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:09.679 [2024-07-25 09:36:42.152699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb9c0, cid 4, qid 0 00:22:09.679 [2024-07-25 09:36:42.152711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ebb40, cid 5, qid 0 00:22:09.679 [2024-07-25 09:36:42.152845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.679 [2024-07-25 09:36:42.152859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.679 [2024-07-25 09:36:42.152866] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152872] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98b540): datao=0, datal=1024, cccid=4 00:22:09.679 [2024-07-25 09:36:42.152880] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9eb9c0) on tqpair(0x98b540): expected_datao=0, payload_size=1024 00:22:09.679 [2024-07-25 09:36:42.152888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152898] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152905] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.679 [2024-07-25 09:36:42.152922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.679 [2024-07-25 09:36:42.152943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.152949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ebb40) on tqpair=0x98b540 00:22:09.679 [2024-07-25 09:36:42.194368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.679 [2024-07-25 09:36:42.194387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.679 [2024-07-25 09:36:42.194395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.679 [2024-07-25 09:36:42.194402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb9c0) on tqpair=0x98b540 00:22:09.680 [2024-07-25 09:36:42.194420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98b540) 00:22:09.680 [2024-07-25 09:36:42.194440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.680 [2024-07-25 09:36:42.194479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb9c0, cid 4, qid 0 00:22:09.680 [2024-07-25 09:36:42.194733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.680 [2024-07-25 09:36:42.194745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.680 [2024-07-25 09:36:42.194752] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194758] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98b540): datao=0, datal=3072, cccid=4 00:22:09.680 [2024-07-25 09:36:42.194765] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9eb9c0) on tqpair(0x98b540): expected_datao=0, payload_size=3072 00:22:09.680 [2024-07-25 09:36:42.194772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194781] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194788] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.680 [2024-07-25 09:36:42.194808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.680 [2024-07-25 09:36:42.194815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb9c0) on tqpair=0x98b540 00:22:09.680 [2024-07-25 09:36:42.194843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.194852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98b540) 00:22:09.680 [2024-07-25 09:36:42.194863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.680 [2024-07-25 09:36:42.194891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb9c0, cid 4, qid 0 00:22:09.680 [2024-07-25 09:36:42.194988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.680 [2024-07-25 09:36:42.195000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.680 [2024-07-25 09:36:42.195006] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.195012] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98b540): datao=0, datal=8, cccid=4 00:22:09.680 [2024-07-25 09:36:42.195019] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9eb9c0) on tqpair(0x98b540): expected_datao=0, payload_size=8 00:22:09.680 [2024-07-25 09:36:42.195026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.195035] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.195042] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.235529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.680 [2024-07-25 09:36:42.235547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.680 [2024-07-25 09:36:42.235554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.680 [2024-07-25 09:36:42.235561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb9c0) on tqpair=0x98b540 00:22:09.680 ===================================================== 00:22:09.680 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:09.680 ===================================================== 00:22:09.680 Controller Capabilities/Features 00:22:09.680 ================================ 00:22:09.680 Vendor ID: 0000 00:22:09.680 Subsystem Vendor ID: 0000 00:22:09.680 Serial Number: .................... 00:22:09.680 Model Number: ........................................ 
00:22:09.680 Firmware Version: 24.09 00:22:09.680 Recommended Arb Burst: 0 00:22:09.680 IEEE OUI Identifier: 00 00 00 00:22:09.680 Multi-path I/O 00:22:09.680 May have multiple subsystem ports: No 00:22:09.680 May have multiple controllers: No 00:22:09.680 Associated with SR-IOV VF: No 00:22:09.680 Max Data Transfer Size: 131072 00:22:09.680 Max Number of Namespaces: 0 00:22:09.680 Max Number of I/O Queues: 1024 00:22:09.680 NVMe Specification Version (VS): 1.3 00:22:09.680 NVMe Specification Version (Identify): 1.3 00:22:09.680 Maximum Queue Entries: 128 00:22:09.680 Contiguous Queues Required: Yes 00:22:09.680 Arbitration Mechanisms Supported 00:22:09.680 Weighted Round Robin: Not Supported 00:22:09.680 Vendor Specific: Not Supported 00:22:09.680 Reset Timeout: 15000 ms 00:22:09.680 Doorbell Stride: 4 bytes 00:22:09.680 NVM Subsystem Reset: Not Supported 00:22:09.680 Command Sets Supported 00:22:09.680 NVM Command Set: Supported 00:22:09.680 Boot Partition: Not Supported 00:22:09.680 Memory Page Size Minimum: 4096 bytes 00:22:09.680 Memory Page Size Maximum: 4096 bytes 00:22:09.680 Persistent Memory Region: Not Supported 00:22:09.680 Optional Asynchronous Events Supported 00:22:09.680 Namespace Attribute Notices: Not Supported 00:22:09.680 Firmware Activation Notices: Not Supported 00:22:09.680 ANA Change Notices: Not Supported 00:22:09.680 PLE Aggregate Log Change Notices: Not Supported 00:22:09.680 LBA Status Info Alert Notices: Not Supported 00:22:09.680 EGE Aggregate Log Change Notices: Not Supported 00:22:09.680 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.680 Zone Descriptor Change Notices: Not Supported 00:22:09.680 Discovery Log Change Notices: Supported 00:22:09.680 Controller Attributes 00:22:09.680 128-bit Host Identifier: Not Supported 00:22:09.680 Non-Operational Permissive Mode: Not Supported 00:22:09.680 NVM Sets: Not Supported 00:22:09.680 Read Recovery Levels: Not Supported 00:22:09.680 Endurance Groups: Not Supported 00:22:09.680 Predictable Latency Mode: Not Supported 00:22:09.680 Traffic Based Keep ALive: Not Supported 00:22:09.680 Namespace Granularity: Not Supported 00:22:09.680 SQ Associations: Not Supported 00:22:09.680 UUID List: Not Supported 00:22:09.680 Multi-Domain Subsystem: Not Supported 00:22:09.680 Fixed Capacity Management: Not Supported 00:22:09.680 Variable Capacity Management: Not Supported 00:22:09.680 Delete Endurance Group: Not Supported 00:22:09.680 Delete NVM Set: Not Supported 00:22:09.680 Extended LBA Formats Supported: Not Supported 00:22:09.680 Flexible Data Placement Supported: Not Supported 00:22:09.680 00:22:09.680 Controller Memory Buffer Support 00:22:09.680 ================================ 00:22:09.680 Supported: No 00:22:09.680 00:22:09.680 Persistent Memory Region Support 00:22:09.680 ================================ 00:22:09.680 Supported: No 00:22:09.680 00:22:09.680 Admin Command Set Attributes 00:22:09.680 ============================ 00:22:09.680 Security Send/Receive: Not Supported 00:22:09.680 Format NVM: Not Supported 00:22:09.680 Firmware Activate/Download: Not Supported 00:22:09.680 Namespace Management: Not Supported 00:22:09.680 Device Self-Test: Not Supported 00:22:09.680 Directives: Not Supported 00:22:09.680 NVMe-MI: Not Supported 00:22:09.680 Virtualization Management: Not Supported 00:22:09.680 Doorbell Buffer Config: Not Supported 00:22:09.680 Get LBA Status Capability: Not Supported 00:22:09.680 Command & Feature Lockdown Capability: Not Supported 00:22:09.680 Abort Command Limit: 1 00:22:09.680 Async 
Event Request Limit: 4 00:22:09.680 Number of Firmware Slots: N/A 00:22:09.680 Firmware Slot 1 Read-Only: N/A 00:22:09.680 Firmware Activation Without Reset: N/A 00:22:09.680 Multiple Update Detection Support: N/A 00:22:09.680 Firmware Update Granularity: No Information Provided 00:22:09.680 Per-Namespace SMART Log: No 00:22:09.680 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.680 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:09.680 Command Effects Log Page: Not Supported 00:22:09.680 Get Log Page Extended Data: Supported 00:22:09.680 Telemetry Log Pages: Not Supported 00:22:09.680 Persistent Event Log Pages: Not Supported 00:22:09.680 Supported Log Pages Log Page: May Support 00:22:09.680 Commands Supported & Effects Log Page: Not Supported 00:22:09.680 Feature Identifiers & Effects Log Page:May Support 00:22:09.680 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.680 Data Area 4 for Telemetry Log: Not Supported 00:22:09.680 Error Log Page Entries Supported: 128 00:22:09.680 Keep Alive: Not Supported 00:22:09.680 00:22:09.680 NVM Command Set Attributes 00:22:09.680 ========================== 00:22:09.680 Submission Queue Entry Size 00:22:09.680 Max: 1 00:22:09.680 Min: 1 00:22:09.680 Completion Queue Entry Size 00:22:09.680 Max: 1 00:22:09.680 Min: 1 00:22:09.680 Number of Namespaces: 0 00:22:09.680 Compare Command: Not Supported 00:22:09.680 Write Uncorrectable Command: Not Supported 00:22:09.680 Dataset Management Command: Not Supported 00:22:09.680 Write Zeroes Command: Not Supported 00:22:09.680 Set Features Save Field: Not Supported 00:22:09.680 Reservations: Not Supported 00:22:09.680 Timestamp: Not Supported 00:22:09.680 Copy: Not Supported 00:22:09.680 Volatile Write Cache: Not Present 00:22:09.680 Atomic Write Unit (Normal): 1 00:22:09.680 Atomic Write Unit (PFail): 1 00:22:09.680 Atomic Compare & Write Unit: 1 00:22:09.681 Fused Compare & Write: Supported 00:22:09.681 Scatter-Gather List 00:22:09.681 SGL Command Set: Supported 00:22:09.681 SGL Keyed: Supported 00:22:09.681 SGL Bit Bucket Descriptor: Not Supported 00:22:09.681 SGL Metadata Pointer: Not Supported 00:22:09.681 Oversized SGL: Not Supported 00:22:09.681 SGL Metadata Address: Not Supported 00:22:09.681 SGL Offset: Supported 00:22:09.681 Transport SGL Data Block: Not Supported 00:22:09.681 Replay Protected Memory Block: Not Supported 00:22:09.681 00:22:09.681 Firmware Slot Information 00:22:09.681 ========================= 00:22:09.681 Active slot: 0 00:22:09.681 00:22:09.681 00:22:09.681 Error Log 00:22:09.681 ========= 00:22:09.681 00:22:09.681 Active Namespaces 00:22:09.681 ================= 00:22:09.681 Discovery Log Page 00:22:09.681 ================== 00:22:09.681 Generation Counter: 2 00:22:09.681 Number of Records: 2 00:22:09.681 Record Format: 0 00:22:09.681 00:22:09.681 Discovery Log Entry 0 00:22:09.681 ---------------------- 00:22:09.681 Transport Type: 3 (TCP) 00:22:09.681 Address Family: 1 (IPv4) 00:22:09.681 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:09.681 Entry Flags: 00:22:09.681 Duplicate Returned Information: 1 00:22:09.681 Explicit Persistent Connection Support for Discovery: 1 00:22:09.681 Transport Requirements: 00:22:09.681 Secure Channel: Not Required 00:22:09.681 Port ID: 0 (0x0000) 00:22:09.681 Controller ID: 65535 (0xffff) 00:22:09.681 Admin Max SQ Size: 128 00:22:09.681 Transport Service Identifier: 4420 00:22:09.681 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:09.681 Transport Address: 10.0.0.2 00:22:09.681 
Discovery Log Entry 1 00:22:09.681 ---------------------- 00:22:09.681 Transport Type: 3 (TCP) 00:22:09.681 Address Family: 1 (IPv4) 00:22:09.681 Subsystem Type: 2 (NVM Subsystem) 00:22:09.681 Entry Flags: 00:22:09.681 Duplicate Returned Information: 0 00:22:09.681 Explicit Persistent Connection Support for Discovery: 0 00:22:09.681 Transport Requirements: 00:22:09.681 Secure Channel: Not Required 00:22:09.681 Port ID: 0 (0x0000) 00:22:09.681 Controller ID: 65535 (0xffff) 00:22:09.681 Admin Max SQ Size: 128 00:22:09.681 Transport Service Identifier: 4420 00:22:09.681 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:09.681 Transport Address: 10.0.0.2 [2024-07-25 09:36:42.235684] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:09.681 [2024-07-25 09:36:42.235705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb3c0) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.235715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.681 [2024-07-25 09:36:42.235724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb540) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.235731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.681 [2024-07-25 09:36:42.235739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb6c0) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.235746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.681 [2024-07-25 09:36:42.235754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.235761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.681 [2024-07-25 09:36:42.235778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.235787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.235793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.235803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.235837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.235988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.681 [2024-07-25 09:36:42.236001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.681 [2024-07-25 09:36:42.236008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.236029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.236053] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.236079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.236195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.681 [2024-07-25 09:36:42.236208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.681 [2024-07-25 09:36:42.236215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.236229] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:09.681 [2024-07-25 09:36:42.236236] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:09.681 [2024-07-25 09:36:42.236251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.236276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.236296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.236406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.681 [2024-07-25 09:36:42.236421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.681 [2024-07-25 09:36:42.236428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.236451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.236477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.236498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.236571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.681 [2024-07-25 09:36:42.236583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.681 [2024-07-25 09:36:42.236589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.236611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236627] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.236650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.236671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.236743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.681 [2024-07-25 09:36:42.236754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.681 [2024-07-25 09:36:42.236761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.236787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.236811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.236830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.236903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.681 [2024-07-25 09:36:42.236916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.681 [2024-07-25 09:36:42.236922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.681 [2024-07-25 09:36:42.236944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.681 [2024-07-25 09:36:42.236959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.681 [2024-07-25 09:36:42.236968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.681 [2024-07-25 09:36:42.236988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.681 [2024-07-25 09:36:42.237074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.237087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.237093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.237099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.682 [2024-07-25 09:36:42.237115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.237123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.237129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.682 [2024-07-25 09:36:42.237139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.237159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.682 [2024-07-25 09:36:42.237231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.237244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.237251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.237257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.682 [2024-07-25 09:36:42.237272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.237281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.237287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.682 [2024-07-25 09:36:42.237297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.237316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.682 [2024-07-25 09:36:42.241386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.241402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.241409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.241416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.682 [2024-07-25 09:36:42.241437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.241455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.241461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98b540) 00:22:09.682 [2024-07-25 09:36:42.241472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.241494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eb840, cid 3, qid 0 00:22:09.682 [2024-07-25 09:36:42.241685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.241697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.241703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.241709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eb840) on tqpair=0x98b540 00:22:09.682 [2024-07-25 09:36:42.241722] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:09.682 00:22:09.682 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:09.682 [2024-07-25 09:36:42.273720] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
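(Illustrative sketch, not part of the captured console output: the two identify passes recorded above can be re-run by hand against the same target at 10.0.0.2:4420. The transport string, subsystem NQNs and the -L all debug flag are taken from the invocation logged above; the relative binary path and the nvme-cli alternative at the end are assumptions, not commands from this run.)

    # 1. Identify the discovery controller (source of the "NVMe over Fabrics controller
    #    at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery" report above):
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

    # 2. Identify the NVM subsystem advertised in Discovery Log Entry 1, as host/identify.sh
    #    invokes it above; -L all enables the *DEBUG* trace seen throughout this log:
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all

    # (Assumption: an ordinary Linux initiator could attach the same subsystem with nvme-cli.)
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1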
00:22:09.682 [2024-07-25 09:36:42.273764] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576946 ] 00:22:09.682 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.682 [2024-07-25 09:36:42.307181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:09.682 [2024-07-25 09:36:42.307229] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.682 [2024-07-25 09:36:42.307238] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.682 [2024-07-25 09:36:42.307251] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.682 [2024-07-25 09:36:42.307262] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.682 [2024-07-25 09:36:42.307531] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:09.682 [2024-07-25 09:36:42.307568] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5ec540 0 00:22:09.682 [2024-07-25 09:36:42.314371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.682 [2024-07-25 09:36:42.314393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.682 [2024-07-25 09:36:42.314416] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.682 [2024-07-25 09:36:42.314423] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.682 [2024-07-25 09:36:42.314470] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.314481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.314488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.682 [2024-07-25 09:36:42.314502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.682 [2024-07-25 09:36:42.314529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.682 [2024-07-25 09:36:42.322369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.322393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.322401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.682 [2024-07-25 09:36:42.322426] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.682 [2024-07-25 09:36:42.322442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:09.682 [2024-07-25 09:36:42.322452] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:09.682 [2024-07-25 09:36:42.322468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 
[2024-07-25 09:36:42.322483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.682 [2024-07-25 09:36:42.322494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.322517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.682 [2024-07-25 09:36:42.322644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.322670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.322676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.682 [2024-07-25 09:36:42.322694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:09.682 [2024-07-25 09:36:42.322707] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:09.682 [2024-07-25 09:36:42.322719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.682 [2024-07-25 09:36:42.322742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.322762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.682 [2024-07-25 09:36:42.322842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.322855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.322861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.682 [2024-07-25 09:36:42.322875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:09.682 [2024-07-25 09:36:42.322888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.682 [2024-07-25 09:36:42.322900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.322913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.682 [2024-07-25 09:36:42.322922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.322943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.682 [2024-07-25 09:36:42.323017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.323029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 
[2024-07-25 09:36:42.323039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.323046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.682 [2024-07-25 09:36:42.323054] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.682 [2024-07-25 09:36:42.323069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.323078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.323084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.682 [2024-07-25 09:36:42.323094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.682 [2024-07-25 09:36:42.323114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.682 [2024-07-25 09:36:42.323183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.682 [2024-07-25 09:36:42.323194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.682 [2024-07-25 09:36:42.323201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.682 [2024-07-25 09:36:42.323207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.683 [2024-07-25 09:36:42.323214] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:09.683 [2024-07-25 09:36:42.323222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:09.683 [2024-07-25 09:36:42.323234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.683 [2024-07-25 09:36:42.323347] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:09.683 [2024-07-25 09:36:42.323354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:09.683 [2024-07-25 09:36:42.323379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.323403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.683 [2024-07-25 09:36:42.323424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.683 [2024-07-25 09:36:42.323558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.683 [2024-07-25 09:36:42.323572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.683 [2024-07-25 09:36:42.323578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.683 [2024-07-25 
09:36:42.323592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.683 [2024-07-25 09:36:42.323608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.323634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.683 [2024-07-25 09:36:42.323655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.683 [2024-07-25 09:36:42.323741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.683 [2024-07-25 09:36:42.323753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.683 [2024-07-25 09:36:42.323762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.683 [2024-07-25 09:36:42.323776] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.683 [2024-07-25 09:36:42.323784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.323797] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:09.683 [2024-07-25 09:36:42.323809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.323822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.323839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.683 [2024-07-25 09:36:42.323860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.683 [2024-07-25 09:36:42.323968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.683 [2024-07-25 09:36:42.323979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.683 [2024-07-25 09:36:42.323986] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.323992] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=4096, cccid=0 00:22:09.683 [2024-07-25 09:36:42.323999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64c3c0) on tqpair(0x5ec540): expected_datao=0, payload_size=4096 00:22:09.683 [2024-07-25 09:36:42.324006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324015] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.683 
[2024-07-25 09:36:42.324034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.683 [2024-07-25 09:36:42.324043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.683 [2024-07-25 09:36:42.324049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.683 [2024-07-25 09:36:42.324065] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:09.683 [2024-07-25 09:36:42.324073] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:09.683 [2024-07-25 09:36:42.324080] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:09.683 [2024-07-25 09:36:42.324086] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:09.683 [2024-07-25 09:36:42.324093] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:09.683 [2024-07-25 09:36:42.324100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.324155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.683 [2024-07-25 09:36:42.324176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.683 [2024-07-25 09:36:42.324267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.683 [2024-07-25 09:36:42.324279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.683 [2024-07-25 09:36:42.324286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.683 [2024-07-25 09:36:42.324301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.324324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.683 [2024-07-25 09:36:42.324349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5ec540) 
00:22:09.683 [2024-07-25 09:36:42.324380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.683 [2024-07-25 09:36:42.324390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.324412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.683 [2024-07-25 09:36:42.324422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.324443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.683 [2024-07-25 09:36:42.324452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.683 [2024-07-25 09:36:42.324503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.683 [2024-07-25 09:36:42.324531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c3c0, cid 0, qid 0 00:22:09.683 [2024-07-25 09:36:42.324542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c540, cid 1, qid 0 00:22:09.683 [2024-07-25 09:36:42.324550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c6c0, cid 2, qid 0 00:22:09.683 [2024-07-25 09:36:42.324557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.683 [2024-07-25 09:36:42.324565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.683 [2024-07-25 09:36:42.324771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.683 [2024-07-25 09:36:42.324784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.683 [2024-07-25 09:36:42.324793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.683 [2024-07-25 09:36:42.324807] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:09.683 [2024-07-25 09:36:42.324816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324833] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:09.683 [2024-07-25 09:36:42.324855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.683 [2024-07-25 09:36:42.324868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.684 [2024-07-25 09:36:42.324877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.684 [2024-07-25 09:36:42.324898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.684 [2024-07-25 09:36:42.325037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.684 [2024-07-25 09:36:42.325050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.684 [2024-07-25 09:36:42.325057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.684 [2024-07-25 09:36:42.325126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.325145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.325159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.684 [2024-07-25 09:36:42.325176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.684 [2024-07-25 09:36:42.325197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.684 [2024-07-25 09:36:42.325292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.684 [2024-07-25 09:36:42.325305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.684 [2024-07-25 09:36:42.325312] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325318] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=4096, cccid=4 00:22:09.684 [2024-07-25 09:36:42.325325] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64c9c0) on tqpair(0x5ec540): expected_datao=0, payload_size=4096 00:22:09.684 [2024-07-25 09:36:42.325332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325341] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325349] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.684 [2024-07-25 09:36:42.325394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:09.684 [2024-07-25 09:36:42.325400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.684 [2024-07-25 09:36:42.325421] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:09.684 [2024-07-25 09:36:42.325439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.325457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.325470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.684 [2024-07-25 09:36:42.325488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.684 [2024-07-25 09:36:42.325509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.684 [2024-07-25 09:36:42.325626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.684 [2024-07-25 09:36:42.325641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.684 [2024-07-25 09:36:42.325647] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325653] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=4096, cccid=4 00:22:09.684 [2024-07-25 09:36:42.325660] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64c9c0) on tqpair(0x5ec540): expected_datao=0, payload_size=4096 00:22:09.684 [2024-07-25 09:36:42.325668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325677] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325700] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.684 [2024-07-25 09:36:42.325720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.684 [2024-07-25 09:36:42.325726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.684 [2024-07-25 09:36:42.325753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.325771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.325784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.684 [2024-07-25 09:36:42.325801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.684 [2024-07-25 09:36:42.325822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.684 [2024-07-25 09:36:42.325916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.684 [2024-07-25 09:36:42.325929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.684 [2024-07-25 09:36:42.325936] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325942] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=4096, cccid=4 00:22:09.684 [2024-07-25 09:36:42.325949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64c9c0) on tqpair(0x5ec540): expected_datao=0, payload_size=4096 00:22:09.684 [2024-07-25 09:36:42.325955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325965] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325972] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.325984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.684 [2024-07-25 09:36:42.325993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.684 [2024-07-25 09:36:42.326002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.326009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.684 [2024-07-25 09:36:42.326020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326084] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:09.684 [2024-07-25 09:36:42.326092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:09.684 [2024-07-25 09:36:42.326100] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:09.684 [2024-07-25 09:36:42.326118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.326126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.684 [2024-07-25 09:36:42.326135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.684 [2024-07-25 09:36:42.326146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.326153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.684 [2024-07-25 09:36:42.326158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5ec540) 00:22:09.684 [2024-07-25 09:36:42.326167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.684 [2024-07-25 09:36:42.326201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.684 [2024-07-25 09:36:42.326211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64cb40, cid 5, qid 0 00:22:09.685 [2024-07-25 09:36:42.330381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.330397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.330404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.330420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.330429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.330436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64cb40) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.330459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.330478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.330500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64cb40, cid 5, qid 0 00:22:09.685 [2024-07-25 09:36:42.330638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.330650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.330671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64cb40) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.330693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.330712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.330732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64cb40, cid 5, qid 0 00:22:09.685 [2024-07-25 09:36:42.330810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.330823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.330829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64cb40) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.330850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.330868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.330888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64cb40, cid 5, qid 0 00:22:09.685 [2024-07-25 09:36:42.330968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.330981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.330987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.330993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64cb40) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.331016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.331037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.331048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.331064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.331075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.331090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.331101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5ec540) 00:22:09.685 [2024-07-25 09:36:42.331117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.685 [2024-07-25 09:36:42.331137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64cb40, cid 5, qid 0 00:22:09.685 [2024-07-25 09:36:42.331151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c9c0, cid 4, qid 0 00:22:09.685 [2024-07-25 09:36:42.331159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64ccc0, cid 6, qid 0 00:22:09.685 [2024-07-25 
09:36:42.331166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64ce40, cid 7, qid 0 00:22:09.685 [2024-07-25 09:36:42.331319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.685 [2024-07-25 09:36:42.331330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.685 [2024-07-25 09:36:42.331351] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331365] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=8192, cccid=5 00:22:09.685 [2024-07-25 09:36:42.331373] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64cb40) on tqpair(0x5ec540): expected_datao=0, payload_size=8192 00:22:09.685 [2024-07-25 09:36:42.331380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331399] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331423] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.685 [2024-07-25 09:36:42.331446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.685 [2024-07-25 09:36:42.331452] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331458] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=512, cccid=4 00:22:09.685 [2024-07-25 09:36:42.331466] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64c9c0) on tqpair(0x5ec540): expected_datao=0, payload_size=512 00:22:09.685 [2024-07-25 09:36:42.331473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331483] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331490] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.685 [2024-07-25 09:36:42.331507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.685 [2024-07-25 09:36:42.331513] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=512, cccid=6 00:22:09.685 [2024-07-25 09:36:42.331527] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64ccc0) on tqpair(0x5ec540): expected_datao=0, payload_size=512 00:22:09.685 [2024-07-25 09:36:42.331534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331543] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331550] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.685 [2024-07-25 09:36:42.331567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.685 [2024-07-25 09:36:42.331574] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331580] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5ec540): datao=0, datal=4096, cccid=7 00:22:09.685 [2024-07-25 09:36:42.331587] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x64ce40) on tqpair(0x5ec540): expected_datao=0, payload_size=4096 00:22:09.685 [2024-07-25 09:36:42.331594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331604] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.331633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.331639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64cb40) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.331683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.331694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.331700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c9c0) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.331739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.331749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.331755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64ccc0) on tqpair=0x5ec540 00:22:09.685 [2024-07-25 09:36:42.331771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.685 [2024-07-25 09:36:42.331780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.685 [2024-07-25 09:36:42.331786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.685 [2024-07-25 09:36:42.331792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64ce40) on tqpair=0x5ec540 00:22:09.685 ===================================================== 00:22:09.685 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.685 ===================================================== 00:22:09.685 Controller Capabilities/Features 00:22:09.685 ================================ 00:22:09.685 Vendor ID: 8086 00:22:09.685 Subsystem Vendor ID: 8086 00:22:09.685 Serial Number: SPDK00000000000001 00:22:09.685 Model Number: SPDK bdev Controller 00:22:09.685 Firmware Version: 24.09 00:22:09.685 Recommended Arb Burst: 6 00:22:09.686 IEEE OUI Identifier: e4 d2 5c 00:22:09.686 Multi-path I/O 00:22:09.686 May have multiple subsystem ports: Yes 00:22:09.686 May have multiple controllers: Yes 00:22:09.686 Associated with SR-IOV VF: No 00:22:09.686 Max Data Transfer Size: 131072 00:22:09.686 Max Number of Namespaces: 32 00:22:09.686 Max Number of I/O Queues: 127 00:22:09.686 NVMe Specification Version (VS): 1.3 00:22:09.686 NVMe Specification Version (Identify): 1.3 00:22:09.686 Maximum Queue Entries: 128 00:22:09.686 Contiguous Queues Required: Yes 00:22:09.686 Arbitration Mechanisms Supported 00:22:09.686 Weighted Round Robin: Not Supported 00:22:09.686 Vendor Specific: Not Supported 00:22:09.686 Reset Timeout: 15000 ms 00:22:09.686 
Doorbell Stride: 4 bytes 00:22:09.686 NVM Subsystem Reset: Not Supported 00:22:09.686 Command Sets Supported 00:22:09.686 NVM Command Set: Supported 00:22:09.686 Boot Partition: Not Supported 00:22:09.686 Memory Page Size Minimum: 4096 bytes 00:22:09.686 Memory Page Size Maximum: 4096 bytes 00:22:09.686 Persistent Memory Region: Not Supported 00:22:09.686 Optional Asynchronous Events Supported 00:22:09.686 Namespace Attribute Notices: Supported 00:22:09.686 Firmware Activation Notices: Not Supported 00:22:09.686 ANA Change Notices: Not Supported 00:22:09.686 PLE Aggregate Log Change Notices: Not Supported 00:22:09.686 LBA Status Info Alert Notices: Not Supported 00:22:09.686 EGE Aggregate Log Change Notices: Not Supported 00:22:09.686 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.686 Zone Descriptor Change Notices: Not Supported 00:22:09.686 Discovery Log Change Notices: Not Supported 00:22:09.686 Controller Attributes 00:22:09.686 128-bit Host Identifier: Supported 00:22:09.686 Non-Operational Permissive Mode: Not Supported 00:22:09.686 NVM Sets: Not Supported 00:22:09.686 Read Recovery Levels: Not Supported 00:22:09.686 Endurance Groups: Not Supported 00:22:09.686 Predictable Latency Mode: Not Supported 00:22:09.686 Traffic Based Keep ALive: Not Supported 00:22:09.686 Namespace Granularity: Not Supported 00:22:09.686 SQ Associations: Not Supported 00:22:09.686 UUID List: Not Supported 00:22:09.686 Multi-Domain Subsystem: Not Supported 00:22:09.686 Fixed Capacity Management: Not Supported 00:22:09.686 Variable Capacity Management: Not Supported 00:22:09.686 Delete Endurance Group: Not Supported 00:22:09.686 Delete NVM Set: Not Supported 00:22:09.686 Extended LBA Formats Supported: Not Supported 00:22:09.686 Flexible Data Placement Supported: Not Supported 00:22:09.686 00:22:09.686 Controller Memory Buffer Support 00:22:09.686 ================================ 00:22:09.686 Supported: No 00:22:09.686 00:22:09.686 Persistent Memory Region Support 00:22:09.686 ================================ 00:22:09.686 Supported: No 00:22:09.686 00:22:09.686 Admin Command Set Attributes 00:22:09.686 ============================ 00:22:09.686 Security Send/Receive: Not Supported 00:22:09.686 Format NVM: Not Supported 00:22:09.686 Firmware Activate/Download: Not Supported 00:22:09.686 Namespace Management: Not Supported 00:22:09.686 Device Self-Test: Not Supported 00:22:09.686 Directives: Not Supported 00:22:09.686 NVMe-MI: Not Supported 00:22:09.686 Virtualization Management: Not Supported 00:22:09.686 Doorbell Buffer Config: Not Supported 00:22:09.686 Get LBA Status Capability: Not Supported 00:22:09.686 Command & Feature Lockdown Capability: Not Supported 00:22:09.686 Abort Command Limit: 4 00:22:09.686 Async Event Request Limit: 4 00:22:09.686 Number of Firmware Slots: N/A 00:22:09.686 Firmware Slot 1 Read-Only: N/A 00:22:09.686 Firmware Activation Without Reset: N/A 00:22:09.686 Multiple Update Detection Support: N/A 00:22:09.686 Firmware Update Granularity: No Information Provided 00:22:09.686 Per-Namespace SMART Log: No 00:22:09.686 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.686 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:09.686 Command Effects Log Page: Supported 00:22:09.686 Get Log Page Extended Data: Supported 00:22:09.686 Telemetry Log Pages: Not Supported 00:22:09.686 Persistent Event Log Pages: Not Supported 00:22:09.686 Supported Log Pages Log Page: May Support 00:22:09.686 Commands Supported & Effects Log Page: Not Supported 00:22:09.686 Feature Identifiers & 
Effects Log Page:May Support 00:22:09.686 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.686 Data Area 4 for Telemetry Log: Not Supported 00:22:09.686 Error Log Page Entries Supported: 128 00:22:09.686 Keep Alive: Supported 00:22:09.686 Keep Alive Granularity: 10000 ms 00:22:09.686 00:22:09.686 NVM Command Set Attributes 00:22:09.686 ========================== 00:22:09.686 Submission Queue Entry Size 00:22:09.686 Max: 64 00:22:09.686 Min: 64 00:22:09.686 Completion Queue Entry Size 00:22:09.686 Max: 16 00:22:09.686 Min: 16 00:22:09.686 Number of Namespaces: 32 00:22:09.686 Compare Command: Supported 00:22:09.686 Write Uncorrectable Command: Not Supported 00:22:09.686 Dataset Management Command: Supported 00:22:09.686 Write Zeroes Command: Supported 00:22:09.686 Set Features Save Field: Not Supported 00:22:09.686 Reservations: Supported 00:22:09.686 Timestamp: Not Supported 00:22:09.686 Copy: Supported 00:22:09.686 Volatile Write Cache: Present 00:22:09.686 Atomic Write Unit (Normal): 1 00:22:09.686 Atomic Write Unit (PFail): 1 00:22:09.686 Atomic Compare & Write Unit: 1 00:22:09.686 Fused Compare & Write: Supported 00:22:09.686 Scatter-Gather List 00:22:09.686 SGL Command Set: Supported 00:22:09.686 SGL Keyed: Supported 00:22:09.686 SGL Bit Bucket Descriptor: Not Supported 00:22:09.686 SGL Metadata Pointer: Not Supported 00:22:09.686 Oversized SGL: Not Supported 00:22:09.686 SGL Metadata Address: Not Supported 00:22:09.686 SGL Offset: Supported 00:22:09.686 Transport SGL Data Block: Not Supported 00:22:09.686 Replay Protected Memory Block: Not Supported 00:22:09.686 00:22:09.686 Firmware Slot Information 00:22:09.686 ========================= 00:22:09.686 Active slot: 1 00:22:09.686 Slot 1 Firmware Revision: 24.09 00:22:09.686 00:22:09.686 00:22:09.686 Commands Supported and Effects 00:22:09.686 ============================== 00:22:09.686 Admin Commands 00:22:09.686 -------------- 00:22:09.686 Get Log Page (02h): Supported 00:22:09.686 Identify (06h): Supported 00:22:09.686 Abort (08h): Supported 00:22:09.686 Set Features (09h): Supported 00:22:09.686 Get Features (0Ah): Supported 00:22:09.686 Asynchronous Event Request (0Ch): Supported 00:22:09.686 Keep Alive (18h): Supported 00:22:09.686 I/O Commands 00:22:09.686 ------------ 00:22:09.686 Flush (00h): Supported LBA-Change 00:22:09.686 Write (01h): Supported LBA-Change 00:22:09.686 Read (02h): Supported 00:22:09.686 Compare (05h): Supported 00:22:09.686 Write Zeroes (08h): Supported LBA-Change 00:22:09.686 Dataset Management (09h): Supported LBA-Change 00:22:09.686 Copy (19h): Supported LBA-Change 00:22:09.686 00:22:09.686 Error Log 00:22:09.686 ========= 00:22:09.686 00:22:09.686 Arbitration 00:22:09.686 =========== 00:22:09.686 Arbitration Burst: 1 00:22:09.686 00:22:09.686 Power Management 00:22:09.686 ================ 00:22:09.686 Number of Power States: 1 00:22:09.686 Current Power State: Power State #0 00:22:09.686 Power State #0: 00:22:09.686 Max Power: 0.00 W 00:22:09.686 Non-Operational State: Operational 00:22:09.686 Entry Latency: Not Reported 00:22:09.686 Exit Latency: Not Reported 00:22:09.686 Relative Read Throughput: 0 00:22:09.686 Relative Read Latency: 0 00:22:09.686 Relative Write Throughput: 0 00:22:09.686 Relative Write Latency: 0 00:22:09.686 Idle Power: Not Reported 00:22:09.686 Active Power: Not Reported 00:22:09.686 Non-Operational Permissive Mode: Not Supported 00:22:09.686 00:22:09.686 Health Information 00:22:09.686 ================== 00:22:09.686 Critical Warnings: 00:22:09.686 Available Spare Space: 
OK 00:22:09.686 Temperature: OK 00:22:09.686 Device Reliability: OK 00:22:09.686 Read Only: No 00:22:09.686 Volatile Memory Backup: OK 00:22:09.686 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:09.686 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:09.686 Available Spare: 0% 00:22:09.686 Available Spare Threshold: 0% 00:22:09.686 Life Percentage Used:[2024-07-25 09:36:42.331910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.686 [2024-07-25 09:36:42.331921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5ec540) 00:22:09.686 [2024-07-25 09:36:42.331932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.686 [2024-07-25 09:36:42.331953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64ce40, cid 7, qid 0 00:22:09.686 [2024-07-25 09:36:42.332081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.332092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.332098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64ce40) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332147] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:09.687 [2024-07-25 09:36:42.332165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c3c0) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.687 [2024-07-25 09:36:42.332182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c540) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.687 [2024-07-25 09:36:42.332197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c6c0) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.687 [2024-07-25 09:36:42.332212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.687 [2024-07-25 09:36:42.332230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.332253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.332281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.332445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.332460] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.332466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.332507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.332533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.332621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.332633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.332639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332652] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:09.687 [2024-07-25 09:36:42.332674] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:09.687 [2024-07-25 09:36:42.332690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.332714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.332734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.332814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.332827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.332833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.332855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.332869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.332879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.332899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.332975] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.332987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.332994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.333015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.333039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.333062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.333130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.333141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.333147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.333169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.333193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.333212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.333284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.333297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.333303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.333324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.333372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.333408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.333487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.333501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.333508] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.333531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.333557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.333578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.333666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.333678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.333684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.333707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.333746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.333766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.333841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.333854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.333860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.687 [2024-07-25 09:36:42.333882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.333896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.687 [2024-07-25 09:36:42.333906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.687 [2024-07-25 09:36:42.333926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.687 [2024-07-25 09:36:42.333993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.687 [2024-07-25 09:36:42.334003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.687 [2024-07-25 09:36:42.334010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.687 [2024-07-25 09:36:42.334016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.688 
[2024-07-25 09:36:42.334031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.334039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.334045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.688 [2024-07-25 09:36:42.334055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.688 [2024-07-25 09:36:42.334074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.688 [2024-07-25 09:36:42.334146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.688 [2024-07-25 09:36:42.334159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.688 [2024-07-25 09:36:42.334165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.334171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.688 [2024-07-25 09:36:42.334186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.334195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.334201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.688 [2024-07-25 09:36:42.334211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.688 [2024-07-25 09:36:42.334230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.688 [2024-07-25 09:36:42.334298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.688 [2024-07-25 09:36:42.334309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.688 [2024-07-25 09:36:42.334316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.334322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.688 [2024-07-25 09:36:42.334352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.338387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.338395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5ec540) 00:22:09.688 [2024-07-25 09:36:42.338406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.688 [2024-07-25 09:36:42.338429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x64c840, cid 3, qid 0 00:22:09.688 [2024-07-25 09:36:42.338607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.688 [2024-07-25 09:36:42.338624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.688 [2024-07-25 09:36:42.338631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.688 [2024-07-25 09:36:42.338638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x64c840) on tqpair=0x5ec540 00:22:09.688 [2024-07-25 09:36:42.338651] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:09.688 0% 00:22:09.688 Data Units Read: 0 00:22:09.688 Data 
Units Written: 0 00:22:09.688 Host Read Commands: 0 00:22:09.688 Host Write Commands: 0 00:22:09.688 Controller Busy Time: 0 minutes 00:22:09.688 Power Cycles: 0 00:22:09.688 Power On Hours: 0 hours 00:22:09.688 Unsafe Shutdowns: 0 00:22:09.688 Unrecoverable Media Errors: 0 00:22:09.688 Lifetime Error Log Entries: 0 00:22:09.688 Warning Temperature Time: 0 minutes 00:22:09.688 Critical Temperature Time: 0 minutes 00:22:09.688 00:22:09.688 Number of Queues 00:22:09.688 ================ 00:22:09.688 Number of I/O Submission Queues: 127 00:22:09.688 Number of I/O Completion Queues: 127 00:22:09.688 00:22:09.688 Active Namespaces 00:22:09.688 ================= 00:22:09.688 Namespace ID:1 00:22:09.688 Error Recovery Timeout: Unlimited 00:22:09.688 Command Set Identifier: NVM (00h) 00:22:09.688 Deallocate: Supported 00:22:09.688 Deallocated/Unwritten Error: Not Supported 00:22:09.688 Deallocated Read Value: Unknown 00:22:09.688 Deallocate in Write Zeroes: Not Supported 00:22:09.688 Deallocated Guard Field: 0xFFFF 00:22:09.688 Flush: Supported 00:22:09.688 Reservation: Supported 00:22:09.688 Namespace Sharing Capabilities: Multiple Controllers 00:22:09.688 Size (in LBAs): 131072 (0GiB) 00:22:09.688 Capacity (in LBAs): 131072 (0GiB) 00:22:09.688 Utilization (in LBAs): 131072 (0GiB) 00:22:09.688 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:09.688 EUI64: ABCDEF0123456789 00:22:09.688 UUID: f607c6fa-c941-4b98-aa99-deebd407ef33 00:22:09.688 Thin Provisioning: Not Supported 00:22:09.688 Per-NS Atomic Units: Yes 00:22:09.688 Atomic Boundary Size (Normal): 0 00:22:09.688 Atomic Boundary Size (PFail): 0 00:22:09.688 Atomic Boundary Offset: 0 00:22:09.688 Maximum Single Source Range Length: 65535 00:22:09.688 Maximum Copy Length: 65535 00:22:09.688 Maximum Source Range Count: 1 00:22:09.688 NGUID/EUI64 Never Reused: No 00:22:09.688 Namespace Write Protected: No 00:22:09.688 Number of LBA Formats: 1 00:22:09.688 Current LBA Format: LBA Format #00 00:22:09.688 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:09.688 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.688 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.688 rmmod nvme_tcp 00:22:09.688 rmmod nvme_fabrics 00:22:09.688 rmmod nvme_keyring 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 576789 ']' 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 576789 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 576789 ']' 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 576789 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 576789 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 576789' 00:22:09.946 killing process with pid 576789 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 576789 00:22:09.946 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 576789 00:22:10.208 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.208 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.208 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.208 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.208 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.208 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.209 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.209 09:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:12.108 00:22:12.108 real 0m5.900s 00:22:12.108 user 0m6.993s 00:22:12.108 sys 0m1.804s 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.108 ************************************ 00:22:12.108 END TEST nvmf_identify 00:22:12.108 ************************************ 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.108 09:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.109 ************************************ 00:22:12.109 START TEST nvmf_perf 00:22:12.109 
************************************ 00:22:12.109 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:12.367 * Looking for test storage... 00:22:12.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:12.367 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.368 09:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.267 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:14.268 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:14.268 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:14.268 Found net devices under 0000:82:00.0: cvl_0_0 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:14.268 Found net devices under 0000:82:00.1: cvl_0_1 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.268 09:36:46 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:22:14.268 00:22:14.268 --- 10.0.0.2 ping statistics --- 00:22:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.268 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:14.268 00:22:14.268 --- 10.0.0.1 ping statistics --- 00:22:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.268 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.268 09:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=578874 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 578874 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 578874 ']' 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:14.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.526 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.526 [2024-07-25 09:36:47.068564] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:14.526 [2024-07-25 09:36:47.068642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.526 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.526 [2024-07-25 09:36:47.136780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.526 [2024-07-25 09:36:47.253324] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.526 [2024-07-25 09:36:47.253385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.526 [2024-07-25 09:36:47.253427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.526 [2024-07-25 09:36:47.253439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.526 [2024-07-25 09:36:47.253450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.526 [2024-07-25 09:36:47.253510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.526 [2024-07-25 09:36:47.253558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.526 [2024-07-25 09:36:47.253605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.526 [2024-07-25 09:36:47.253607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.456 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.456 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:15.456 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.456 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.456 09:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.456 09:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.456 09:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:15.456 09:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:18.730 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:18.730 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:18.730 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:81:00.0 00:22:18.730 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:18.987 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:18.987 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:81:00.0 ']' 00:22:18.987 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:18.987 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:18.987 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:19.247 [2024-07-25 09:36:51.835532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.247 09:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.506 09:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:19.506 09:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.763 09:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:19.763 09:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:20.020 09:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.277 [2024-07-25 09:36:52.835165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.277 09:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:20.534 09:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:81:00.0 ']' 00:22:20.534 09:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:22:20.534 09:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:20.534 09:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:22:21.904 Initializing NVMe Controllers 00:22:21.904 Attached to NVMe Controller at 0000:81:00.0 [8086:0a54] 00:22:21.904 Associating PCIE (0000:81:00.0) NSID 1 with lcore 0 00:22:21.904 Initialization complete. Launching workers. 
00:22:21.904 ======================================================== 00:22:21.904 Latency(us) 00:22:21.904 Device Information : IOPS MiB/s Average min max 00:22:21.904 PCIE (0000:81:00.0) NSID 1 from core 0: 85743.68 334.94 372.65 43.70 4520.95 00:22:21.904 ======================================================== 00:22:21.904 Total : 85743.68 334.94 372.65 43.70 4520.95 00:22:21.904 00:22:21.904 09:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:21.904 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.273 Initializing NVMe Controllers 00:22:23.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:23.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:23.273 Initialization complete. Launching workers. 00:22:23.273 ======================================================== 00:22:23.273 Latency(us) 00:22:23.273 Device Information : IOPS MiB/s Average min max 00:22:23.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 51.00 0.20 20050.36 137.83 45721.19 00:22:23.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15219.73 5983.82 47903.69 00:22:23.273 ======================================================== 00:22:23.273 Total : 117.00 0.46 17325.39 137.83 47903.69 00:22:23.273 00:22:23.273 09:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:23.273 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.644 Initializing NVMe Controllers 00:22:24.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:24.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:24.644 Initialization complete. Launching workers. 
00:22:24.644 ======================================================== 00:22:24.644 Latency(us) 00:22:24.644 Device Information : IOPS MiB/s Average min max 00:22:24.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8654.00 33.80 3701.63 552.28 8212.72 00:22:24.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3932.00 15.36 8189.74 6765.50 15960.60 00:22:24.644 ======================================================== 00:22:24.644 Total : 12586.00 49.16 5103.76 552.28 15960.60 00:22:24.644 00:22:24.644 09:36:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:24.644 09:36:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:24.644 09:36:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:24.644 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.169 Initializing NVMe Controllers 00:22:27.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.169 Controller IO queue size 128, less than required. 00:22:27.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.169 Controller IO queue size 128, less than required. 00:22:27.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:27.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:27.169 Initialization complete. Launching workers. 00:22:27.169 ======================================================== 00:22:27.169 Latency(us) 00:22:27.169 Device Information : IOPS MiB/s Average min max 00:22:27.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.64 380.16 85562.11 49195.55 143181.37 00:22:27.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.40 146.10 232680.23 109811.21 340821.42 00:22:27.169 ======================================================== 00:22:27.169 Total : 2105.04 526.26 126405.01 49195.55 340821.42 00:22:27.169 00:22:27.169 09:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:27.169 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.427 No valid NVMe controllers or AIO or URING devices found 00:22:27.427 Initializing NVMe Controllers 00:22:27.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.427 Controller IO queue size 128, less than required. 00:22:27.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.427 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:27.427 Controller IO queue size 128, less than required. 00:22:27.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.427 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:27.427 WARNING: Some requested NVMe devices were skipped 00:22:27.684 09:37:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:27.684 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.208 Initializing NVMe Controllers 00:22:30.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.208 Controller IO queue size 128, less than required. 00:22:30.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.208 Controller IO queue size 128, less than required. 00:22:30.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.208 Initialization complete. Launching workers. 00:22:30.208 00:22:30.208 ==================== 00:22:30.208 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:30.208 TCP transport: 00:22:30.208 polls: 11003 00:22:30.208 idle_polls: 8213 00:22:30.208 sock_completions: 2790 00:22:30.208 nvme_completions: 5179 00:22:30.208 submitted_requests: 7676 00:22:30.208 queued_requests: 1 00:22:30.208 00:22:30.208 ==================== 00:22:30.208 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:30.208 TCP transport: 00:22:30.208 polls: 11127 00:22:30.208 idle_polls: 8134 00:22:30.208 sock_completions: 2993 00:22:30.208 nvme_completions: 5817 00:22:30.208 submitted_requests: 8746 00:22:30.208 queued_requests: 1 00:22:30.208 ======================================================== 00:22:30.208 Latency(us) 00:22:30.208 Device Information : IOPS MiB/s Average min max 00:22:30.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1294.39 323.60 102097.09 50758.52 175788.24 00:22:30.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1453.87 363.47 89106.50 46488.07 149401.70 00:22:30.208 ======================================================== 00:22:30.208 Total : 2748.26 687.06 95224.86 46488.07 175788.24 00:22:30.208 00:22:30.208 09:37:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:30.208 09:37:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.465 rmmod nvme_tcp 00:22:30.465 rmmod nvme_fabrics 00:22:30.465 rmmod nvme_keyring 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 578874 ']' 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 578874 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 578874 ']' 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 578874 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.465 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 578874 00:22:30.722 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:30.722 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:30.722 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 578874' 00:22:30.722 killing process with pid 578874 00:22:30.722 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 578874 00:22:30.722 09:37:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 578874 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.254 09:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.153 00:22:35.153 real 0m22.982s 00:22:35.153 user 1m12.675s 00:22:35.153 sys 0m5.622s 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.153 ************************************ 00:22:35.153 END TEST nvmf_perf 00:22:35.153 ************************************ 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.153 ************************************ 00:22:35.153 START TEST nvmf_fio_host 00:22:35.153 ************************************ 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:35.153 * Looking for test storage... 00:22:35.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.153 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.412 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.413 09:37:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.314 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:37.315 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:37.315 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.315 09:37:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:37.315 Found net devices under 0000:82:00.0: cvl_0_0 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:37.315 Found net devices under 0000:82:00.1: cvl_0_1 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:22:37.315 00:22:37.315 --- 10.0.0.2 ping statistics --- 00:22:37.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.315 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:37.315 00:22:37.315 --- 10.0.0.1 ping statistics --- 00:22:37.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.315 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.315 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=582971 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 582971 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 582971 ']' 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.316 09:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.316 [2024-07-25 09:37:09.963195] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
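[editor's note] For reference, the namespace topology that nvmf_tcp_init builds in the trace above reduces to a short ip/iptables sequence. This is a condensed sketch of that sequence; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are what this particular E810 host reports and will differ on other machines.

  # target-side port moves into a private namespace, initiator-side port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP back into the root namespace
  ping -c 1 10.0.0.2                                                   # both directions verified in the trace above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target is then launched inside the namespace (host/fio.sh@23 above):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &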
00:22:37.316 [2024-07-25 09:37:09.963287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.316 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.316 [2024-07-25 09:37:10.035070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.574 [2024-07-25 09:37:10.154325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.574 [2024-07-25 09:37:10.154398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.574 [2024-07-25 09:37:10.154416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.574 [2024-07-25 09:37:10.154429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.574 [2024-07-25 09:37:10.154441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.574 [2024-07-25 09:37:10.154505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.574 [2024-07-25 09:37:10.154575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.574 [2024-07-25 09:37:10.154677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.574 [2024-07-25 09:37:10.154680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.505 09:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.505 09:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:38.505 09:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:38.505 [2024-07-25 09:37:11.121309] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.505 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:38.505 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.505 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.505 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:38.762 Malloc1 00:22:38.762 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.019 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.275 09:37:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.534 [2024-07-25 09:37:12.156131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.534 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:39.818 
09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:39.818 09:37:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:40.094 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:40.094 fio-3.35 00:22:40.094 Starting 
1 thread 00:22:40.094 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.620 00:22:42.620 test: (groupid=0, jobs=1): err= 0: pid=583462: Thu Jul 25 09:37:14 2024 00:22:42.620 read: IOPS=9053, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2007msec) 00:22:42.620 slat (usec): min=2, max=186, avg= 3.20, stdev= 3.03 00:22:42.620 clat (usec): min=2447, max=12976, avg=7710.51, stdev=626.23 00:22:42.620 lat (usec): min=2469, max=12979, avg=7713.71, stdev=626.11 00:22:42.620 clat percentiles (usec): 00:22:42.620 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7177], 00:22:42.620 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7898], 00:22:42.620 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:22:42.620 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11076], 99.95th=[11600], 00:22:42.620 | 99.99th=[12518] 00:22:42.620 bw ( KiB/s): min=35640, max=36688, per=99.93%, avg=36188.00, stdev=540.95, samples=4 00:22:42.620 iops : min= 8910, max= 9172, avg=9047.00, stdev=135.24, samples=4 00:22:42.620 write: IOPS=9065, BW=35.4MiB/s (37.1MB/s)(71.1MiB/2007msec); 0 zone resets 00:22:42.620 slat (usec): min=2, max=165, avg= 3.35, stdev= 2.77 00:22:42.620 clat (usec): min=1439, max=11606, avg=6363.68, stdev=523.46 00:22:42.620 lat (usec): min=1446, max=11609, avg=6367.04, stdev=523.35 00:22:42.620 clat percentiles (usec): 00:22:42.620 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:22:42.620 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:22:42.620 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:22:42.620 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[10159], 99.95th=[10552], 00:22:42.620 | 99.99th=[11600] 00:22:42.620 bw ( KiB/s): min=35680, max=36880, per=100.00%, avg=36288.00, stdev=581.71, samples=4 00:22:42.620 iops : min= 8920, max= 9220, avg=9072.00, stdev=145.43, samples=4 00:22:42.620 lat (msec) : 2=0.03%, 4=0.11%, 10=99.71%, 20=0.15% 00:22:42.620 cpu : usr=67.70%, sys=29.16%, ctx=171, majf=0, minf=40 00:22:42.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:42.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:42.620 issued rwts: total=18170,18195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:42.620 00:22:42.620 Run status group 0 (all jobs): 00:22:42.620 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2007-2007msec 00:22:42.620 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.1MiB (74.5MB), run=2007-2007msec 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:22:42.620 09:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:22:42.620 09:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:22:42.620 09:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:22:42.620 09:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:42.620 09:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:42.620 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:42.620 fio-3.35 00:22:42.620 Starting 1 thread 00:22:42.620 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.146 00:22:45.146 test: (groupid=0, jobs=1): err= 0: pid=583794: Thu Jul 25 09:37:17 2024 00:22:45.146 read: IOPS=8356, BW=131MiB/s (137MB/s)(262MiB/2009msec) 00:22:45.146 slat (usec): min=2, max=127, avg= 4.14, stdev= 2.41 00:22:45.146 clat (usec): min=1989, max=16873, avg=8672.94, stdev=2129.92 00:22:45.146 lat (usec): min=1993, max=16877, avg=8677.08, stdev=2129.95 00:22:45.146 clat percentiles (usec): 00:22:45.146 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6915], 00:22:45.146 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110], 00:22:45.146 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[11338], 95.00th=[12387], 00:22:45.146 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16712], 99.95th=[16712], 00:22:45.146 | 99.99th=[16909] 00:22:45.146 bw ( KiB/s): min=62976, max=77632, per=52.27%, avg=69880.00, stdev=6778.78, samples=4 
00:22:45.146 iops : min= 3936, max= 4852, avg=4367.50, stdev=423.67, samples=4 00:22:45.146 write: IOPS=4900, BW=76.6MiB/s (80.3MB/s)(143MiB/1863msec); 0 zone resets 00:22:45.146 slat (usec): min=30, max=208, avg=37.49, stdev= 6.17 00:22:45.146 clat (usec): min=4836, max=18522, avg=11404.57, stdev=1955.17 00:22:45.146 lat (usec): min=4872, max=18557, avg=11442.05, stdev=1955.00 00:22:45.146 clat percentiles (usec): 00:22:45.146 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:22:45.146 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:22:45.146 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14091], 95.00th=[15139], 00:22:45.146 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:22:45.146 | 99.99th=[18482] 00:22:45.146 bw ( KiB/s): min=65280, max=80064, per=92.30%, avg=72368.00, stdev=6810.39, samples=4 00:22:45.146 iops : min= 4080, max= 5004, avg=4523.00, stdev=425.65, samples=4 00:22:45.146 lat (msec) : 2=0.01%, 4=0.15%, 10=57.87%, 20=41.98% 00:22:45.146 cpu : usr=83.81%, sys=15.09%, ctx=43, majf=0, minf=62 00:22:45.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:45.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:45.146 issued rwts: total=16788,9129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:45.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:45.146 00:22:45.146 Run status group 0 (all jobs): 00:22:45.146 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2009-2009msec 00:22:45.146 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=143MiB (150MB), run=1863-1863msec 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.146 rmmod nvme_tcp 00:22:45.146 rmmod nvme_fabrics 00:22:45.146 rmmod nvme_keyring 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 582971 ']' 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 582971 
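[editor's note] The two fio result blocks above come from the standard host/fio.sh flow: the target is populated over JSON-RPC, and fio then drives it through the SPDK NVMe fio plugin rather than the kernel initiator. A condensed sketch of that sequence, with the long workspace paths abbreviated to an assumed $SPDK variable:

  rpc=$SPDK/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc1                            # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # first job: 4 KiB random read/write from example_config.fio
  LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
      $SPDK/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
  # second job: 16 KiB SGL exercise from mock_sgl_config.fio against the same listener

The filename string is how the plugin encodes the transport type, address family, address, service id and namespace of the remote controller.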
00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 582971 ']' 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 582971 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 582971 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 582971' 00:22:45.146 killing process with pid 582971 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 582971 00:22:45.146 09:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 582971 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.404 09:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.934 00:22:47.934 real 0m12.321s 00:22:47.934 user 0m37.442s 00:22:47.934 sys 0m3.671s 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.934 ************************************ 00:22:47.934 END TEST nvmf_fio_host 00:22:47.934 ************************************ 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.934 ************************************ 00:22:47.934 START TEST nvmf_failover 00:22:47.934 ************************************ 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:47.934 * Looking for test storage... 
00:22:47.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.934 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.935 09:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.834 09:37:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.834 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:49.835 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:49.835 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:49.835 Found net devices under 0000:82:00.0: cvl_0_0 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:49.835 Found net devices under 0000:82:00.1: cvl_0_1 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:49.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:22:49.835 00:22:49.835 --- 10.0.0.2 ping statistics --- 00:22:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.835 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:49.835 00:22:49.835 --- 10.0.0.1 ping statistics --- 00:22:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.835 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=585987 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:49.835 09:37:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 585987 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 585987 ']' 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.835 09:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.836 [2024-07-25 09:37:22.301828] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:49.836 [2024-07-25 09:37:22.301917] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.836 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.836 [2024-07-25 09:37:22.372346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:49.836 [2024-07-25 09:37:22.487798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.836 [2024-07-25 09:37:22.487861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.836 [2024-07-25 09:37:22.487878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.836 [2024-07-25 09:37:22.487892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.836 [2024-07-25 09:37:22.487904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
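[editor's note] The failover target above is started the same way as in the fio host test, only pinned to cores 1-3 (-m 0xE), after which the script blocks until the application answers on its RPC socket. A minimal stand-in for that nvmfappstart/waitforlisten pattern, assuming the same $SPDK abbreviation as above and reducing the real helper (waitforlisten in common/autotest_common.sh, which also tracks the pid) to a plain retry loop:

  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept commands
  for ((i = 0; i < 100; i++)); do
      $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.5
  done

The trace that follows then creates the Malloc0-backed subsystem and listeners on ports 4420, 4421 and 4422, which is what gives bdevperf alternative paths to fail over to.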
00:22:49.836 [2024-07-25 09:37:22.487994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.836 [2024-07-25 09:37:22.488111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.836 [2024-07-25 09:37:22.488114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:50.782 [2024-07-25 09:37:23.470072] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.782 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:51.045 Malloc0 00:22:51.045 09:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.302 09:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:51.559 09:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.815 [2024-07-25 09:37:24.482834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.815 09:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:52.073 [2024-07-25 09:37:24.723619] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:52.073 09:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:52.330 [2024-07-25 09:37:24.984488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=586402 00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 586402 /var/tmp/bdevperf.sock
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 586402 ']'
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:52.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:52.330 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:52.894 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:52.894 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:22:52.894 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:53.151 NVMe0n1
00:22:53.151 09:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:53.714
00:22:53.714 09:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=586544
00:22:53.714 09:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:53.714 09:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:22:54.647 09:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:54.904 09:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:58.180 09:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:58.437
00:22:58.437 09:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:58.694 [2024-07-25 09:37:31.356014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339d10 is same with the state(5) to be set
(the same tcp.c:1653 *ERROR* line repeats for tqpair=0x1339d10, with only the timestamp advancing, through [2024-07-25 09:37:31.356800])
00:22:58.695 09:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:01.970 09:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:01.970 [2024-07-25 09:37:34.662982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:01.970 09:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:03.341 09:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:03.341 [2024-07-25 09:37:35.942535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133aab0 is same with the state(5) to be set
(the same *ERROR* line repeats for tqpair=0x133aab0 five more times, through [2024-07-25 09:37:35.942684])
00:23:03.341 09:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 586544
00:23:09.901 0
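For reference, the listener shuffle exercised above reduces to roughly the following sequence (a condensed sketch only: the full rpc.py/bdevperf.py workspace paths are abbreviated, the sleeps between steps are omitted, and perform_tests is shown backgrounded where the script tracks it via run_test_pid; every command and port number is the one visible in the trace above):

  # attach the same subsystem through two portals so the initiator has a second path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start I/O, then pull the active listener out from under it, one port at a time
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # host fails over 4420 -> 4421 (see try.txt below)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the original portal
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait   # the '0' printed after 'wait 586544' above is the result of this run

Note that only bdev_nvme_attach_controller goes to the bdevperf RPC socket (-s /var/tmp/bdevperf.sock); the listener add/remove calls target the nvmf target's default RPC socket, exactly as in the log.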
-- # uname 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 586402 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 586402' 00:23:09.901 killing process with pid 586402 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 586402 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 586402 00:23:09.901 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:09.901 [2024-07-25 09:37:25.047724] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:09.901 [2024-07-25 09:37:25.047807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586402 ] 00:23:09.901 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.901 [2024-07-25 09:37:25.110597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.901 [2024-07-25 09:37:25.219305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.901 Running I/O for 15 seconds... 00:23:09.901 [2024-07-25 09:37:27.551399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.901 [2024-07-25 09:37:27.551455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.901 [2024-07-25 09:37:27.551606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 
09:37:27.551621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.551980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.551994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.901 [2024-07-25 09:37:27.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.901 [2024-07-25 09:37:27.552281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84080 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 
09:37:27.552849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.552980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.552995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.902 [2024-07-25 09:37:27.553450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.902 [2024-07-25 09:37:27.553465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.903 [2024-07-25 09:37:27.553819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.553867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84432 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.553880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.553910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.553922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84440 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.553934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.553959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.553969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84448 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.553982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.553995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84456 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 
09:37:27.554064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84464 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84472 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84480 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84488 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84496 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84504 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84512 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84520 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84528 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84536 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84544 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.903 [2024-07-25 09:37:27.554597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.903 [2024-07-25 09:37:27.554608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84552 len:8 PRP1 0x0 PRP2 0x0 00:23:09.903 [2024-07-25 09:37:27.554620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.903 [2024-07-25 09:37:27.554633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:84560 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.554668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84568 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.554716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83568 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.554763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83576 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.554811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83584 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.554863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83592 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.554910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83600 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 
[2024-07-25 09:37:27.554957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.554969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.554980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.554991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83608 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83616 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83624 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83632 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83640 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83648 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83656 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83664 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83672 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83680 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83688 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83696 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83704 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83712 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83720 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.904 [2024-07-25 09:37:27.555716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83728 len:8 PRP1 0x0 PRP2 0x0 00:23:09.904 [2024-07-25 09:37:27.555729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.904 [2024-07-25 09:37:27.555742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.904 [2024-07-25 09:37:27.555753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.555764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83736 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.555777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.555789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.555800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.555811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83744 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.555823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.905 [2024-07-25 09:37:27.555836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.555846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.555857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83752 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.555869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.555881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.555892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.555903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83760 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.555915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.555927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.555938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.555952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83768 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.555965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.555978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.555989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.555999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83776 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.556011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.556024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.556034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.556045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83784 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.556057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.556069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.905 [2024-07-25 09:37:27.556080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.905 [2024-07-25 09:37:27.556091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83792 len:8 PRP1 0x0 PRP2 0x0 00:23:09.905 [2024-07-25 09:37:27.556103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:27.556115] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:09.905 [2024-07-25 09:37:27.556126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:09.905 [2024-07-25 09:37:27.556136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83800 len:8 PRP1 0x0 PRP2 0x0
00:23:09.905 [2024-07-25 09:37:27.556149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.905 [2024-07-25 09:37:27.556161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:09.905 [2024-07-25 09:37:27.556172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:09.905 [2024-07-25 09:37:27.556182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83808 len:8 PRP1 0x0 PRP2 0x0
00:23:09.905 [2024-07-25 09:37:27.556195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.905 [2024-07-25 09:37:27.556255] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe96c10 was disconnected and freed. reset controller.
00:23:09.905 [2024-07-25 09:37:27.556273] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:09.905 [2024-07-25 09:37:27.556308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:09.905 [2024-07-25 09:37:27.556326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.905 [2024-07-25 09:37:27.556340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:09.905 [2024-07-25 09:37:27.556353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.905 [2024-07-25 09:37:27.556379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:09.905 [2024-07-25 09:37:27.556393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.905 [2024-07-25 09:37:27.556411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:09.905 [2024-07-25 09:37:27.556424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.905 [2024-07-25 09:37:27.556436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.905 [2024-07-25 09:37:27.556505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe790f0 (9): Bad file descriptor
00:23:09.905 [2024-07-25 09:37:27.559732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.905 [2024-07-25 09:37:27.633300] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:09.905 [2024-07-25 09:37:31.358896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.905 [2024-07-25 09:37:31.358954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.905 [2024-07-25 09:37:31.359226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.905 [2024-07-25 09:37:31.359241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359277] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:97 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.359976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.359991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114808 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.906 [2024-07-25 09:37:31.360422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.906 [2024-07-25 09:37:31.360437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:09.907 [2024-07-25 09:37:31.360478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360768] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.360980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.360993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.907 [2024-07-25 09:37:31.361079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.907 [2024-07-25 09:37:31.361106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.907 [2024-07-25 09:37:31.361140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.907 [2024-07-25 09:37:31.361564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.907 [2024-07-25 09:37:31.361577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.908 [2024-07-25 09:37:31.361863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.361910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115256 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.361923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.361970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.908 [2024-07-25 09:37:31.361990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.908 [2024-07-25 09:37:31.362027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.908 [2024-07-25 09:37:31.362055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.908 [2024-07-25 09:37:31.362082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe790f0 is same with the state(5) to be set 00:23:09.908 [2024-07-25 09:37:31.362328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115264 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115272 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115280 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115288 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115296 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115304 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115312 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115320 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115328 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115336 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115344 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115352 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114368 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.362963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.362976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.908 [2024-07-25 09:37:31.362986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.908 [2024-07-25 09:37:31.362997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114376 len:8 PRP1 0x0 PRP2 0x0 00:23:09.908 [2024-07-25 09:37:31.363010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.908 [2024-07-25 09:37:31.363022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114384 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114392 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114400 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114408 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114416 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114424 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114432 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114440 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363421] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114448 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114456 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114464 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114472 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114480 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114488 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114336 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114496 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114504 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114512 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114520 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.363947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.363958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114528 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.363970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.363982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 
09:37:31.363993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.364010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114536 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.364023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.364036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.364047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.364058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114544 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.364070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.364082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.364093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.909 [2024-07-25 09:37:31.364103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114552 len:8 PRP1 0x0 PRP2 0x0 00:23:09.909 [2024-07-25 09:37:31.364115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.909 [2024-07-25 09:37:31.364128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.909 [2024-07-25 09:37:31.364145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114560 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114568 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114576 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364285] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114584 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114592 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114600 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114608 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114616 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114632 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114640 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114648 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114656 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114664 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114672 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 
[2024-07-25 09:37:31.364883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114680 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.364961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.364972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.364983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114696 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.364995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.365008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.365019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.365029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114704 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.365042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.365054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.365065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.910 [2024-07-25 09:37:31.365076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114712 len:8 PRP1 0x0 PRP2 0x0 00:23:09.910 [2024-07-25 09:37:31.365088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.910 [2024-07-25 09:37:31.365101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.910 [2024-07-25 09:37:31.365111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114720 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114728 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114736 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114744 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114752 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114760 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114768 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:114776 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114784 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114792 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114800 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114808 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114816 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114824 len:8 PRP1 0x0 PRP2 
0x0 00:23:09.911 [2024-07-25 09:37:31.365780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114832 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114840 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114848 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.365957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114856 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.365970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.365983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.365994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.366005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114864 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.366017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.366029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.366040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.366050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114872 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.371986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.372018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.372032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.372045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114880 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.372058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.372071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.372082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.372092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114888 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.372104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.372117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.372128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.372139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114896 len:8 PRP1 0x0 PRP2 0x0 00:23:09.911 [2024-07-25 09:37:31.372152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.911 [2024-07-25 09:37:31.372164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.911 [2024-07-25 09:37:31.372175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.911 [2024-07-25 09:37:31.372186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114904 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114912 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114920 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372298] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114928 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114936 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114944 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114952 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114960 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114968 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114976 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114984 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114992 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115000 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115008 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115016 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 
[2024-07-25 09:37:31.372883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115024 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.372956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115032 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.372968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.372980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.372991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115040 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.373027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.373038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115048 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.373073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.373084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114344 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.373119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.373130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114352 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.373166] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.373177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114360 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.373212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.373223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115056 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.912 [2024-07-25 09:37:31.373258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.912 [2024-07-25 09:37:31.373269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.912 [2024-07-25 09:37:31.373279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115064 len:8 PRP1 0x0 PRP2 0x0 00:23:09.912 [2024-07-25 09:37:31.373295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115072 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115080 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115088 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:09.913 [2024-07-25 09:37:31.373480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115096 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115104 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115112 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115120 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115128 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115136 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 
09:37:31.373762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115144 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115152 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115160 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115168 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.373959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115176 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.373971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.373984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.373995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115184 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374045] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115192 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115200 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115208 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115216 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115224 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115232 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115240 len:8 PRP1 0x0 PRP2 0x0 00:23:09.913 [2024-07-25 09:37:31.374345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.913 [2024-07-25 09:37:31.374368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.913 [2024-07-25 09:37:31.374381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.913 [2024-07-25 09:37:31.374393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115248 len:8 PRP1 0x0 PRP2 0x0 00:23:09.914 [2024-07-25 09:37:31.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:31.374422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.914 [2024-07-25 09:37:31.374433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.914 [2024-07-25 09:37:31.374444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115256 len:8 PRP1 0x0 PRP2 0x0 00:23:09.914 [2024-07-25 09:37:31.374456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:31.374518] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea7d40 was disconnected and freed. reset controller. 00:23:09.914 [2024-07-25 09:37:31.374536] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:09.914 [2024-07-25 09:37:31.374552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.914 [2024-07-25 09:37:31.374606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe790f0 (9): Bad file descriptor 00:23:09.914 [2024-07-25 09:37:31.377876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.914 [2024-07-25 09:37:31.494923] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:09.914 [2024-07-25 09:37:35.942902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.942945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.942973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.942989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.914 [2024-07-25 09:37:35.943632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.914 [2024-07-25 09:37:35.943827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.914 [2024-07-25 09:37:35.943839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.943853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.943867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.943881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.943894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.943908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.943921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.943935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.943956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.943971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.943984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.943999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.944013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.944040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.944067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.944094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.915 [2024-07-25 09:37:35.944120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:09.915 [2024-07-25 09:37:35.944148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 
09:37:35.944456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.915 [2024-07-25 09:37:35.944985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.915 [2024-07-25 09:37:35.944999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 
[2024-07-25 09:37:35.945624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.945983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.945998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.946011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.946026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.946039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.946054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.946067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.946082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.946095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.946110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.916 [2024-07-25 09:37:35.946123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.916 [2024-07-25 09:37:35.946142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.917 [2024-07-25 09:37:35.946475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80352 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.917 [2024-07-25 09:37:35.946507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.917 [2024-07-25 09:37:35.946535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.917 [2024-07-25 09:37:35.946564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.917 [2024-07-25 09:37:35.946592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.917 [2024-07-25 09:37:35.946620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.917 [2024-07-25 09:37:35.946647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.917 [2024-07-25 09:37:35.946691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.917 [2024-07-25 09:37:35.946703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80400 len:8 PRP1 0x0 PRP2 0x0 00:23:09.917 [2024-07-25 09:37:35.946717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946775] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea9b40 was disconnected and freed. reset controller. 
00:23:09.917 [2024-07-25 09:37:35.946793] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:09.917 [2024-07-25 09:37:35.946826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.917 [2024-07-25 09:37:35.946846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.917 [2024-07-25 09:37:35.946874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.917 [2024-07-25 09:37:35.946901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.917 [2024-07-25 09:37:35.946928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.917 [2024-07-25 09:37:35.946941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.917 [2024-07-25 09:37:35.950221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.917 [2024-07-25 09:37:35.950266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe790f0 (9): Bad file descriptor 00:23:09.917 [2024-07-25 09:37:36.099551] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
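That completes the second failover cycle of this run: the data-path I/O is flushed with SQ DELETION status, qpair 0xea9b40 is freed, the failover target moves from 10.0.0.2:4422 to 10.0.0.2:4420, the queued admin ASYNC EVENT REQUESTs are aborted, and the controller is disconnected and reset successfully. In the second half of the test (traced below) the same abort-and-failover sequence is provoked administratively by removing the active path; a minimal sketch, using the same RPC that appears later in the trace (rpc.py standing in for the full scripts/rpc.py path):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With 10.0.0.2:4421 and 10.0.0.2:4422 still attached as alternate paths for NVMe0, detaching 4420 forces bdev_nvme to fail over to the next path instead of failing the bdev.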
00:23:09.917
00:23:09.917 Latency(us)
00:23:09.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:09.917 Verification LBA range: start 0x0 length 0x4000
00:23:09.917 NVMe0n1 : 15.01 8815.82 34.44 893.72 0.00 13157.04 564.34 23301.69
00:23:09.917 ===================================================================================================================
00:23:09.917 Total : 8815.82 34.44 893.72 0.00 13157.04 564.34 23301.69
00:23:09.917 Received shutdown signal, test time was about 15.000000 seconds
00:23:09.917
00:23:09.917 Latency(us)
00:23:09.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.917 ===================================================================================================================
00:23:09.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=588263
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 588263 /var/tmp/bdevperf.sock
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 588263 ']'
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:09.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
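The bdevperf instance started here runs in RPC-controlled mode: -z makes it wait for an explicit perform_tests RPC instead of starting I/O immediately, and -r points it at its own application socket (/var/tmp/bdevperf.sock), which is also where the bdev_nvme_* RPCs below are sent. Condensed to its essential steps (repo-relative paths used here instead of the full workspace paths; the flags are copied verbatim from the trace), the sequence is roughly:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The trace below attaches the 4421 and 4422 paths the same way before kicking off the one-second verify run.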
00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.917 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:09.917 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.917 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:09.917 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:09.917 [2024-07-25 09:37:42.349519] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:09.917 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:09.917 [2024-07-25 09:37:42.598227] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:09.917 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.481 NVMe0n1 00:23:10.481 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.738 00:23:10.738 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.995 00:23:10.995 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.995 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:11.252 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.508 09:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:14.782 09:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.782 09:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:14.782 09:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=588941 00:23:14.782 09:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.782 09:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 588941 00:23:16.153 0 00:23:16.153 09:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.153 [2024-07-25 09:37:41.802068] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:16.153 [2024-07-25 09:37:41.802157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588263 ] 00:23:16.153 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.153 [2024-07-25 09:37:41.863499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.153 [2024-07-25 09:37:41.970056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.153 [2024-07-25 09:37:44.107634] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:16.153 [2024-07-25 09:37:44.107719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.153 [2024-07-25 09:37:44.107756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.153 [2024-07-25 09:37:44.107774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.153 [2024-07-25 09:37:44.107790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.153 [2024-07-25 09:37:44.107804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.153 [2024-07-25 09:37:44.107818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.153 [2024-07-25 09:37:44.107832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.153 [2024-07-25 09:37:44.107845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.153 [2024-07-25 09:37:44.107871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:16.153 [2024-07-25 09:37:44.107914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:16.153 [2024-07-25 09:37:44.107954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24510f0 (9): Bad file descriptor 00:23:16.153 [2024-07-25 09:37:44.118375] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:16.153 Running I/O for 1 seconds... 
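This is the per-run bdevperf log (try.txt) that failover.sh prints on success: it shows the bdevperf startup (EAL/DPDK init, one reactor on core 0), then the administratively forced path switch — failover from 10.0.0.2:4420 to 10.0.0.2:4421, the aborted admin ASYNC EVENT REQUESTs, and a successful controller reset — before the one-second verify run whose results follow below. After each path removal the test only requires that the controller is still present, which it checks with a pattern like the @95/@99/@103 steps in the trace below (shown here with a shortened rpc.py path):

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0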
00:23:16.153
00:23:16.153 Latency(us)
00:23:16.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.153 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:16.153 Verification LBA range: start 0x0 length 0x4000
00:23:16.153 NVMe0n1 : 1.05 8644.76 33.77 0.00 0.00 14185.63 3009.80 44661.57
00:23:16.153 ===================================================================================================================
00:23:16.153 Total : 8644.76 33.77 0.00 0.00 14185.63 3009.80 44661.57
00:23:16.153 09:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:16.153 09:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:16.410 09:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:16.667 09:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:16.667 09:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:16.924 09:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:17.182 09:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 588263
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 588263 ']'
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 588263
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:20.458 09:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 588263
00:23:20.458 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:20.458 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:20.458 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 588263'
00:23:20.458 killing process with pid 588263
00:23:20.458 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 588263
00:23:20.458 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 588263
00:23:20.716 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:20.716 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.973 rmmod nvme_tcp 00:23:20.973 rmmod nvme_fabrics 00:23:20.973 rmmod nvme_keyring 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 585987 ']' 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 585987 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 585987 ']' 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 585987 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 585987 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 585987' 00:23:20.973 killing process with pid 585987 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 585987 00:23:20.973 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 585987 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.231 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:23.762 00:23:23.762 real 0m35.723s 00:23:23.762 user 2m6.241s 00:23:23.762 sys 0m6.188s 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 ************************************ 00:23:23.762 END TEST nvmf_failover 00:23:23.762 ************************************ 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 ************************************ 00:23:23.762 START TEST nvmf_host_discovery 00:23:23.762 ************************************ 00:23:23.762 09:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:23.762 * Looking for test storage... 00:23:23.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:23.762 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.763 09:37:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.763 09:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.712 09:37:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:25.712 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:25.712 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:25.712 Found net devices under 0000:82:00.0: cvl_0_0 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:25.712 Found net devices under 0000:82:00.1: cvl_0_1 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.712 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.713 09:37:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:23:25.713 00:23:25.713 --- 10.0.0.2 ping statistics --- 00:23:25.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.713 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:25.713 00:23:25.713 --- 10.0.0.1 ping statistics --- 00:23:25.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.713 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=591655 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 591655 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 591655 ']' 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
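Before starting the target app, nvmf/common.sh wired the two e810 ports into a split topology; condensed from the commands traced above (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones this run assigned, so this is a summary of the trace rather than a general recipe):

ip netns add cvl_0_0_ns_spdk                               # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root namespace reaches the target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # and the target namespace reaches back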
00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.713 09:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.713 [2024-07-25 09:37:58.261289] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:25.713 [2024-07-25 09:37:58.261433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.713 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.713 [2024-07-25 09:37:58.328508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.971 [2024-07-25 09:37:58.445787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.971 [2024-07-25 09:37:58.445841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.971 [2024-07-25 09:37:58.445858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.971 [2024-07-25 09:37:58.445872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.971 [2024-07-25 09:37:58.445885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.971 [2024-07-25 09:37:58.445924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.537 [2024-07-25 09:37:59.213656] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:26.537 [2024-07-25 09:37:59.221800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.537 null0 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.537 null1 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=591810 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 591810 /tmp/host.sock 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 591810 ']' 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:26.537 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.537 09:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.795 [2024-07-25 09:37:59.294988] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
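At this point the test has the nvmf target running inside cvl_0_0_ns_spdk and is bringing up a second SPDK app (the host side, RPC socket /tmp/host.sock) to exercise discovery against it. The target-side plumbing, condensed from the rpc_cmd calls traced above and further down (rpc_cmd is the suite's wrapper around scripts/rpc.py talking to the target app), amounts to:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0            # created further down in this trace
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test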
00:23:26.795 [2024-07-25 09:37:59.295070] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591810 ] 00:23:26.795 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.795 [2024-07-25 09:37:59.357508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.795 [2024-07-25 09:37:59.474666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.729 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.730 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.730 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.730 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:27.987 09:38:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.987 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 [2024-07-25 09:38:00.557454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:27.988 09:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:28.921 [2024-07-25 09:38:01.343045] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:28.921 [2024-07-25 09:38:01.343074] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:28.921 [2024-07-25 09:38:01.343100] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.921 [2024-07-25 09:38:01.432409] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:28.921 [2024-07-25 09:38:01.536283] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:28.921 [2024-07-25 09:38:01.536310] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 
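The waitforcondition loops in this discovery test keep polling two small helpers until the host app reports the expected state; reconstructed from the rpc/jq pipelines traced above (the host-side RPC socket path is the one used in this run), they amount to:

get_subsystem_names() {    # controller names the host app has attached
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {          # bdevs created from the namespaces it discovered
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}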
00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.179 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.180 09:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.437 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.437 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.437 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.437 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:29.437 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:29.437 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.438 09:38:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.438 [2024-07-25 09:38:02.138401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.438 [2024-07-25 09:38:02.139025] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:29.438 [2024-07-25 09:38:02.139072] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.438 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:29.696 09:38:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:29.696 [2024-07-25 09:38:02.225807] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:29.696 09:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:29.696 [2024-07-25 09:38:02.287350] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.696 [2024-07-25 09:38:02.287383] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.696 [2024-07-25 09:38:02.287410] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:30.629 09:38:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.629 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.630 [2024-07-25 09:38:03.358934] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:30.630 [2024-07-25 09:38:03.358970] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.630 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:30.887 [2024-07-25 09:38:03.365051] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.887 [2024-07-25 09:38:03.365086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.887 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.887 [2024-07-25 09:38:03.365113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.887 [2024-07-25 09:38:03.365129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.887 [2024-07-25 09:38:03.365145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.887 [2024-07-25 09:38:03.365161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.887 [2024-07-25 09:38:03.365177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.887 [2024-07-25 09:38:03.365192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.887 [2024-07-25 09:38:03.365207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.887 [2024-07-25 09:38:03.375056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.887 [2024-07-25 09:38:03.385099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:30.887 [2024-07-25 09:38:03.385289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.887 [2024-07-25 09:38:03.385321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa07c20 with addr=10.0.0.2, port=4420 00:23:30.887 [2024-07-25 09:38:03.385349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.887 [2024-07-25 09:38:03.385399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.887 [2024-07-25 09:38:03.385430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.887 [2024-07-25 09:38:03.385444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.887 [2024-07-25 09:38:03.385459] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:30.887 [2024-07-25 09:38:03.385479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:30.887 [2024-07-25 09:38:03.395180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:30.887 [2024-07-25 09:38:03.395406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.887 [2024-07-25 09:38:03.395435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa07c20 with addr=10.0.0.2, port=4420 00:23:30.887 [2024-07-25 09:38:03.395452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.887 [2024-07-25 09:38:03.395482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.887 [2024-07-25 09:38:03.395515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.887 [2024-07-25 09:38:03.395532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.887 [2024-07-25 09:38:03.395546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:30.887 [2024-07-25 09:38:03.395566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.887 [2024-07-25 09:38:03.405259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.887 [2024-07-25 09:38:03.405451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.887 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.887 [2024-07-25 09:38:03.405481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa07c20 with addr=10.0.0.2, port=4420 00:23:30.888 [2024-07-25 09:38:03.405498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.888 [2024-07-25 09:38:03.405521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.888 [2024-07-25 09:38:03.405541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.888 [2024-07-25 09:38:03.405555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.888 [2024-07-25 09:38:03.405569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.888 [2024-07-25 09:38:03.405588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.888 [2024-07-25 09:38:03.415342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:30.888 [2024-07-25 09:38:03.415495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-07-25 09:38:03.415524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa07c20 with addr=10.0.0.2, port=4420 00:23:30.888 [2024-07-25 09:38:03.415540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.888 [2024-07-25 09:38:03.415562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.888 [2024-07-25 09:38:03.415632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.888 [2024-07-25 09:38:03.415670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.888 [2024-07-25 09:38:03.415686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:30.888 [2024-07-25 09:38:03.415709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:30.888 [2024-07-25 09:38:03.425427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:30.888 [2024-07-25 09:38:03.425581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-07-25 09:38:03.425609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa07c20 with addr=10.0.0.2, port=4420 00:23:30.888 [2024-07-25 09:38:03.425625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.888 [2024-07-25 09:38:03.425666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.888 [2024-07-25 09:38:03.425713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.888 [2024-07-25 09:38:03.425732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.888 [2024-07-25 09:38:03.425747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:30.888 [2024-07-25 09:38:03.425769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:30.888 [2024-07-25 09:38:03.435501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:30.888 [2024-07-25 09:38:03.435759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-07-25 09:38:03.435790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa07c20 with addr=10.0.0.2, port=4420 00:23:30.888 [2024-07-25 09:38:03.435808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07c20 is same with the state(5) to be set 00:23:30.888 [2024-07-25 09:38:03.435832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa07c20 (9): Bad file descriptor 00:23:30.888 [2024-07-25 09:38:03.435880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.888 [2024-07-25 09:38:03.435901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.888 [2024-07-25 09:38:03.435917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:30.888 [2024-07-25 09:38:03.435938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
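The repeated connect() failures with errno 111 above are the expected fallout of host/discovery.sh@127 removing the 4420 listener while the host still holds an active path to it; the discovery poller subsequently drops that path and keeps only 4421. As a rough standalone equivalent of the rpc_cmd invocations traced in this section (NQN, target address and host RPC socket taken from this log; the scripts/rpc.py path is an assumption):

# Target side: stop listening on 4420 (mirrors the traced nvmf_subsystem_remove_listener call)
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side: list the remaining paths for controller nvme0; only 4421 should survive
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'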
00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.888 [2024-07-25 09:38:03.444900] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:30.888 [2024-07-25 09:38:03.444935] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.888 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.889 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.146 09:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.076 [2024-07-25 09:38:04.732519] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:32.076 [2024-07-25 09:38:04.732541] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:32.076 [2024-07-25 09:38:04.732563] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:32.333 [2024-07-25 09:38:04.820878] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:32.591 [2024-07-25 09:38:05.131074] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:32.591 [2024-07-25 09:38:05.131117] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.591 request: 00:23:32.591 { 00:23:32.591 "name": "nvme", 00:23:32.591 "trtype": "tcp", 00:23:32.591 "traddr": "10.0.0.2", 00:23:32.591 "adrfam": "ipv4", 00:23:32.591 "trsvcid": "8009", 00:23:32.591 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:32.591 "wait_for_attach": true, 00:23:32.591 "method": "bdev_nvme_start_discovery", 00:23:32.591 "req_id": 1 00:23:32.591 } 00:23:32.591 Got JSON-RPC error response 00:23:32.591 response: 00:23:32.591 { 00:23:32.591 "code": -17, 00:23:32.591 "message": "File exists" 00:23:32.591 } 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.591 request: 00:23:32.591 { 00:23:32.591 "name": "nvme_second", 00:23:32.591 "trtype": "tcp", 00:23:32.591 "traddr": "10.0.0.2", 00:23:32.591 "adrfam": "ipv4", 00:23:32.591 "trsvcid": "8009", 00:23:32.591 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:32.591 "wait_for_attach": true, 00:23:32.591 "method": "bdev_nvme_start_discovery", 00:23:32.591 "req_id": 1 00:23:32.591 } 00:23:32.591 Got JSON-RPC error response 00:23:32.591 response: 00:23:32.591 { 00:23:32.591 "code": -17, 00:23:32.591 "message": "File exists" 00:23:32.591 } 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.591 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:32.592 09:38:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.592 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.849 09:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.781 [2024-07-25 09:38:06.338599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:33.781 [2024-07-25 09:38:06.338663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0b030 with addr=10.0.0.2, port=8010 00:23:33.781 [2024-07-25 09:38:06.338686] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:33.781 [2024-07-25 09:38:06.338725] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:33.781 [2024-07-25 09:38:06.338738] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:34.713 [2024-07-25 09:38:07.341011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.713 [2024-07-25 09:38:07.341049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0b030 with addr=10.0.0.2, port=8010 00:23:34.713 [2024-07-25 09:38:07.341072] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:34.713 [2024-07-25 09:38:07.341093] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:23:34.713 [2024-07-25 09:38:07.341107] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:35.645 [2024-07-25 09:38:08.343200] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:35.645 request: 00:23:35.645 { 00:23:35.645 "name": "nvme_second", 00:23:35.645 "trtype": "tcp", 00:23:35.645 "traddr": "10.0.0.2", 00:23:35.646 "adrfam": "ipv4", 00:23:35.646 "trsvcid": "8010", 00:23:35.646 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:35.646 "wait_for_attach": false, 00:23:35.646 "attach_timeout_ms": 3000, 00:23:35.646 "method": "bdev_nvme_start_discovery", 00:23:35.646 "req_id": 1 00:23:35.646 } 00:23:35.646 Got JSON-RPC error response 00:23:35.646 response: 00:23:35.646 { 00:23:35.646 "code": -110, 00:23:35.646 "message": "Connection timed out" 00:23:35.646 } 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:35.646 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.903 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:35.903 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:35.903 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 591810 00:23:35.903 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:35.903 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.903 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.904 rmmod nvme_tcp 00:23:35.904 rmmod nvme_fabrics 00:23:35.904 rmmod nvme_keyring 00:23:35.904 09:38:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 591655 ']' 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 591655 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 591655 ']' 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 591655 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 591655 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 591655' 00:23:35.904 killing process with pid 591655 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 591655 00:23:35.904 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 591655 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.163 09:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.693 00:23:38.693 real 0m14.833s 00:23:38.693 user 0m22.001s 00:23:38.693 sys 0m2.948s 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.693 ************************************ 00:23:38.693 END TEST nvmf_host_discovery 00:23:38.693 ************************************ 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.693 
09:38:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.693 ************************************ 00:23:38.693 START TEST nvmf_host_multipath_status 00:23:38.693 ************************************ 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:38.693 * Looking for test storage... 00:23:38.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.693 
09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.693 
09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.693 09:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:40.066 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.066 09:38:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:40.066 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:40.066 Found net devices under 0000:82:00.0: cvl_0_0 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.066 09:38:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:40.066 Found net devices under 0000:82:00.1: cvl_0_1 00:23:40.066 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.067 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:23:40.323 00:23:40.323 --- 10.0.0.2 ping statistics --- 00:23:40.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.323 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:23:40.323 00:23:40.323 --- 10.0.0.1 ping statistics --- 00:23:40.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.323 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=594966 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:40.323 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 594966 00:23:40.324 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 594966 ']' 00:23:40.324 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.324 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.324 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:40.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.324 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.324 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.324 [2024-07-25 09:38:13.004137] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:40.324 [2024-07-25 09:38:13.004229] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.324 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.580 [2024-07-25 09:38:13.069018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:40.580 [2024-07-25 09:38:13.181169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.580 [2024-07-25 09:38:13.181219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.580 [2024-07-25 09:38:13.181248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.580 [2024-07-25 09:38:13.181259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.580 [2024-07-25 09:38:13.181269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.580 [2024-07-25 09:38:13.183379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.580 [2024-07-25 09:38:13.183390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.580 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.580 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:40.580 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.580 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.580 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.836 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.836 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=594966 00:23:40.836 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:40.836 [2024-07-25 09:38:13.552602] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.092 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:41.349 Malloc0 00:23:41.349 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:41.606 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.863 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.120 [2024-07-25 09:38:14.702457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.120 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:42.378 [2024-07-25 09:38:14.943062] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=595251 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 595251 /var/tmp/bdevperf.sock 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 595251 ']' 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
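For readers following the trace from this point on: the multipath_status test attaches both listeners of nqn.2016-06.io.spdk:cnode1 to bdevperf as one multipath bdev, flips the ANA state of each listener, and then repeatedly checks bdevperf's view of each path. A minimal sketch of that pattern, assembled from the commands visible in the trace below — the attach, set_ana_state and jq command lines are copied from the log, while the body of the port_status helper (host/multipath_status.sh@64) is a plausible reconstruction and not the verbatim script (rpc.py stands for the full scripts/rpc.py path used in the trace):

    # attach the 4420 listener, then the 4421 listener as a second path (-x multipath), as traced below
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # change the ANA state advertised by one listener (optimized / non_optimized / inaccessible)
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized

    # port_status <trsvcid> <attr> <expected>: ask bdevperf for its io_paths and compare one field
    port_status() {
        local port=$1 attr=$2 expected=$3 status
        status=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $status == "$expected" ]]
    }

In the trace that follows, check_status strings six such port_status calls together (current/connected/accessible for ports 4420 and 4421) after each set_ANA_state change, which is why the same bdev_nvme_get_io_paths | jq pairs repeat throughout the rest of this section.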
00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.378 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:42.635 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.635 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:42.635 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:42.892 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:43.457 Nvme0n1 00:23:43.457 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:43.713 Nvme0n1 00:23:43.713 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:43.713 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:46.240 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:46.240 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:46.240 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.497 09:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:47.431 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:47.431 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:47.431 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.431 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.688 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.688 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:47.688 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.689 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.947 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.947 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.947 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.947 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.205 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.205 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.205 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.205 09:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.462 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.462 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:48.462 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.462 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:49.029 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:49.287 09:38:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.851 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:50.797 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:50.798 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.798 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.798 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:51.058 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.058 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:51.058 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.058 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.316 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.316 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.316 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.316 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.574 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.574 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.574 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.574 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.832 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.832 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.832 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.832 09:38:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.090 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.090 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.090 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.090 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.347 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.347 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:52.347 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:52.606 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:52.864 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.237 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.495 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.495 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.495 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.495 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.752 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.752 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.752 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.752 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.010 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.010 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.010 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.010 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.268 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.268 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.268 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.268 09:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.527 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.527 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:55.527 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.093 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:56.093 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:57.465 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:57.465 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:57.465 09:38:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.465 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.465 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.465 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:57.465 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.465 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.723 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.723 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.723 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.723 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.981 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.981 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.981 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.981 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.238 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.238 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.238 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.238 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.495 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.495 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:58.495 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.753 09:38:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.010 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.010 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:59.010 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:59.267 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:59.531 09:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:00.542 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:00.542 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:00.542 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.542 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.799 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.799 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:00.799 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.799 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.056 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.056 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.056 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.056 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.313 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.313 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.313 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.313 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.571 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.828 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.828 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:01.828 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:02.086 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.343 09:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:03.714 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:03.714 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:03.715 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.715 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.715 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.715 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.715 09:38:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.715 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.972 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.972 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.972 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.972 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.229 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.229 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.229 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.229 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.487 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.487 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:04.487 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.487 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:05.052 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.052 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:05.052 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.052 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.052 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.052 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:05.618 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:05.618 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:05.618 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:06.184 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:07.116 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:07.116 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:07.117 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.117 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.374 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.374 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:07.374 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.374 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:07.632 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.632 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:07.632 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.632 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.889 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.889 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.889 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.889 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.146 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.146 09:38:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.146 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.146 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.403 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.403 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:08.403 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.403 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:08.660 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.660 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:08.660 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:08.917 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.174 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:10.546 09:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:10.546 09:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.546 09:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.546 09:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.546 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.546 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:10.546 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.546 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.811 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.811 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.811 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.811 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.073 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.073 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.073 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.073 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.331 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.331 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.331 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.331 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.588 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.588 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:11.588 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.588 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.846 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.846 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:11.846 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.412 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:12.412 09:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.785 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.043 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.043 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.043 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.043 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.301 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.301 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.301 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.301 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.866 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.867 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.433 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.433 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:15.433 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.433 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:15.999 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:16.933 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:16.933 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.933 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.933 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.190 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.190 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:17.190 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.190 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.447 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.447 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.447 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.447 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.705 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:17.705 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.705 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.705 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.963 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.963 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.963 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.963 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:18.221 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.221 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:18.221 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.221 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 595251 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 595251 ']' 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 595251 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 595251 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 595251' 00:24:18.478 killing process with pid 595251 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 595251 00:24:18.478 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 595251 00:24:18.757 Connection closed with partial response: 00:24:18.757 00:24:18.757 00:24:18.757 
09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 595251 00:24:18.757 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:18.758 [2024-07-25 09:38:15.006518] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:24:18.758 [2024-07-25 09:38:15.006606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595251 ] 00:24:18.758 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.758 [2024-07-25 09:38:15.065952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.758 [2024-07-25 09:38:15.185698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.758 Running I/O for 90 seconds... 00:24:18.758 [2024-07-25 09:38:31.785713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.758 [2024-07-25 09:38:31.785767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.785819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.785861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.785878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.785900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.785917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.785939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.785956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.785978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.785994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.786017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.786033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.758 
[2024-07-25 09:38:31.786056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.786073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.787969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.787991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788316] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.758 [2024-07-25 09:38:31.788498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.758 [2024-07-25 09:38:31.788521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.758 [2024-07-25 09:38:31.788537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.788560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.788576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.788598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.788615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.788659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.788676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.788703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.788720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.788742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.788758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.788781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.788798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.789982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.789998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.759 [2024-07-25 09:38:31.790486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.759 [2024-07-25 09:38:31.790502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
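The WRITE completions above all carry status (03/02): status code type 3h is path-related status, and 02h under it is asymmetric access inaccessible, which is what the target returns for I/O sent to a path whose listener has been set to the inaccessible ANA state, so the bdev_nvme layer can retry on the remaining accessible path. A quick, optional way to gauge how much of try.txt consists of these completions (path as shown in the trace; adjust to the local workspace):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt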
00:24:18.760 [2024-07-25 09:38:31.790894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.790969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.790985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.791927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.791943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.760 [2024-07-25 09:38:31.792693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.760 [2024-07-25 09:38:31.792715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.792758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.792795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.761 [2024-07-25 09:38:31.792834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.792881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.761 [2024-07-25 09:38:31.792918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.792960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.792982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.792999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.793962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.793979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.794000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.794016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:18.761 [2024-07-25 09:38:31.794038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.794054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.794075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.794091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.794112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.794128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.794149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.761 [2024-07-25 09:38:31.794165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.761 [2024-07-25 09:38:31.794186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.794945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.794967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.762 [2024-07-25 09:38:31.794984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.795021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.795058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.795096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.795133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.795171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.762 [2024-07-25 09:38:31.795209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.795980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.796003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.796031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.796048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.762 [2024-07-25 09:38:31.796070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.762 [2024-07-25 09:38:31.796087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.796976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.796992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:24:18.763 [2024-07-25 09:38:31.797169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.763 [2024-07-25 09:38:31.797475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.763 [2024-07-25 09:38:31.797498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.797973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.797989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.764 [2024-07-25 09:38:31.798339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.764 [2024-07-25 09:38:31.798826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.798849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.798865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.764 [2024-07-25 09:38:31.799682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.764 [2024-07-25 09:38:31.799705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.799971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.799987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.765 
[2024-07-25 09:38:31.800321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.800964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.800987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.801003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.801026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.801042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.765 [2024-07-25 09:38:31.801064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.765 [2024-07-25 09:38:31.801080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801514] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.766 [2024-07-25 09:38:31.801667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.801842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.801858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.802970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.802990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.803013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.803030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.803052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.766 [2024-07-25 09:38:31.803068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.766 [2024-07-25 09:38:31.803090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:24:18.767 [2024-07-25 09:38:31.803679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.803967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.803989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.804006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.804028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.804044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.804066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.804083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.804107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.767 [2024-07-25 09:38:31.804123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.767 [2024-07-25 09:38:31.804146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.769 [2024-07-25 09:38:31.804828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.804970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.804986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.805262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.805284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.769 [2024-07-25 09:38:31.805300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.769 [2024-07-25 09:38:31.806432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.769 [2024-07-25 09:38:31.806449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:24:18.770 [2024-07-25 09:38:31.806869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.806983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.806999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.770 [2024-07-25 09:38:31.807945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.770 [2024-07-25 09:38:31.807967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.807984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 
[2024-07-25 09:38:31.808022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.771 [2024-07-25 09:38:31.808178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.808962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.808985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70448 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 
09:38:31.809825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.809981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.771 [2024-07-25 09:38:31.809998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.771 [2024-07-25 09:38:31.810020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.810971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.810993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811009] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.772 [2024-07-25 09:38:31.811450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.772 [2024-07-25 09:38:31.811472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.811825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.811842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.773 [2024-07-25 09:38:31.812674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.812973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.812990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.773 
[2024-07-25 09:38:31.813372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.773 [2024-07-25 09:38:31.813466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.773 [2024-07-25 09:38:31.813488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.813965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.813994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.774 [2024-07-25 09:38:31.814696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.814795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.814811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.815438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.815461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.815488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.774 [2024-07-25 09:38:31.815505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.774 [2024-07-25 09:38:31.815528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:75 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.815968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.815985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.775 
[2024-07-25 09:38:31.816741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.816982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.816999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.817021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.775 [2024-07-25 09:38:31.817038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.775 [2024-07-25 09:38:31.817060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 
09:38:31.817906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.817966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.817982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.818278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71000 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.818294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.819159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.776 [2024-07-25 09:38:31.819203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.819242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.819280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.819318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.776 [2024-07-25 09:38:31.819370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.776 [2024-07-25 09:38:31.819394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.819965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.819987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:18.777 [2024-07-25 09:38:31.820291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.777 [2024-07-25 09:38:31.820786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.777 [2024-07-25 09:38:31.820803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.820830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.820847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.820869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.820885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.820908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.820924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.820947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.820963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.820985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.821002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828212] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.778 [2024-07-25 09:38:31.828378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.828447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.828466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 
09:38:31.829272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70528 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.778 [2024-07-25 09:38:31.829802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.778 [2024-07-25 09:38:31.829824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.829840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.829862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.829879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.829900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.829917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.829938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.829955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.829977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.829993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830454] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 
m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.830972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.830988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.831026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.831071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.831109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.831148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.831186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.779 [2024-07-25 09:38:31.831225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.779 [2024-07-25 09:38:31.831247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.831954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.831971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.832784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.780 [2024-07-25 09:38:31.832811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.832840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.832858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.832880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.780 [2024-07-25 09:38:31.832897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.832919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.832935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.832957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.832973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.832994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.780 [2024-07-25 09:38:31.833420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.780 [2024-07-25 09:38:31.833441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.833944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:18.781 [2024-07-25 09:38:31.833983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.833999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.781 [2024-07-25 09:38:31.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.781 [2024-07-25 09:38:31.834739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.834776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.834793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.834814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.834830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.834853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.834869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.834891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.782 [2024-07-25 09:38:31.834907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.834929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.834945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.782 [2024-07-25 09:38:31.835759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.835979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.835995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.782 [2024-07-25 09:38:31.836858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.782 [2024-07-25 09:38:31.836880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.836896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.783 
[2024-07-25 09:38:31.836919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.836934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.836956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.836972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.836994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.837981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.837997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.838019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 09:38:31.838034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.783 [2024-07-25 09:38:31.838056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.783 [2024-07-25 
09:38:31.838072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.838377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.838395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71000 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.784 [2024-07-25 09:38:31.839331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.839979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.839995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.784 [2024-07-25 09:38:31.840270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.784 [2024-07-25 09:38:31.840292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:18.785 [2024-07-25 09:38:31.840417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.840974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.840997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.785 [2024-07-25 09:38:31.841320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.785 [2024-07-25 09:38:31.841852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.785 [2024-07-25 09:38:31.841963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.785 [2024-07-25 09:38:31.841980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.842969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.842986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:24:18.786 [2024-07-25 09:38:31.843139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.786 [2024-07-25 09:38:31.843471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.786 [2024-07-25 09:38:31.843497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.843973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.843990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.787 [2024-07-25 09:38:31.844433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:31.844915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:31.844936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.787 [2024-07-25 09:38:48.449578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.787 [2024-07-25 09:38:48.449601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.449900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.449923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.788 [2024-07-25 09:38:48.449948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.788 [2024-07-25 09:38:48.452103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:24:18.788 [2024-07-25 09:38:48.452303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.788 [2024-07-25 09:38:48.452762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.788 [2024-07-25 09:38:48.452800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.788 [2024-07-25 09:38:48.452837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.788 [2024-07-25 09:38:48.452874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.788 [2024-07-25 09:38:48.452912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.788 [2024-07-25 09:38:48.452933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.452949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.452971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.452987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.453025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.453066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.453105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.453445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.789 [2024-07-25 09:38:48.453523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.453737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.453759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.453776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 
nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.455755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.455799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.455838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.455883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.455921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.789 [2024-07-25 09:38:48.455960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.455982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.455998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.456020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.456036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.789 [2024-07-25 09:38:48.456058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.789 [2024-07-25 09:38:48.456074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:24:18.790 [2024-07-25 09:38:48.456330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.456649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.456707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.456732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.456748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.457615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.457661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.457717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.457756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.457794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.457832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.457870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.457908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.457946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.457968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.457984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.790 [2024-07-25 09:38:48.458293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.790 [2024-07-25 09:38:48.458330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.790 [2024-07-25 09:38:48.458396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.790 [2024-07-25 09:38:48.458421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.458693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.458733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.458770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.458807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.458941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.458957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.459627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.459666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.459720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.459967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.459987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.460010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.460026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:24:18.791 [2024-07-25 09:38:48.460048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.791 [2024-07-25 09:38:48.460064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.461772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.461803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.461847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.461866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.461888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.461905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.791 [2024-07-25 09:38:48.461927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.791 [2024-07-25 09:38:48.461944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.461966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.461983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.462568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.462629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.462646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.792 [2024-07-25 09:38:48.463392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.792 [2024-07-25 09:38:48.463513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.463552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.463591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.463635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.463694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.792 [2024-07-25 09:38:48.463717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.792 [2024-07-25 09:38:48.463733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.463771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.463809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.463853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.463890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.463928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.463965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.463987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.464003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.465758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.465802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.465863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.465902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.465939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.465976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.465997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:24:18.793 [2024-07-25 09:38:48.466294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.466732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.466790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.793 [2024-07-25 09:38:48.466806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.470315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.470354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.470407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.470427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.470450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.470467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.470490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.793 [2024-07-25 09:38:48.470506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.793 [2024-07-25 09:38:48.470529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.470964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.470985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.794 [2024-07-25 09:38:48.471073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.794 [2024-07-25 09:38:48.471689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.794 [2024-07-25 09:38:48.471897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.794 [2024-07-25 09:38:48.471919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.471935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.471956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.471971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.471992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.472028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.472065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.472102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.472138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.472175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.472931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:24:18.795 [2024-07-25 09:38:48.472981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.472999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.473383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.473426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.473465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.473506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.473958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.473984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.795 [2024-07-25 09:38:48.474373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.795 [2024-07-25 09:38:48.474517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.795 [2024-07-25 09:38:48.474537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.796 [2024-07-25 09:38:48.474616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.474752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.474883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.474898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.475696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.475755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.475796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.475835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.475871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.475908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.475944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.475965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.475980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.476016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.476053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.476089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.476125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.476161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.476197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.476234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.476281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.476318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.476364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.476384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.477789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.477812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.477853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.477871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.477892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.477908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.477929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.477945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:24:18.796 [2024-07-25 09:38:48.477966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.477982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.478003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.478018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.478039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-25 09:38:48.478054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.478075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.478091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.478112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.478132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.796 [2024-07-25 09:38:48.478157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.796 [2024-07-25 09:38:48.478174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.478953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.478974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-25 09:38:48.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.797 [2024-07-25 09:38:48.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.481974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.797 [2024-07-25 09:38:48.481990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.797 [2024-07-25 09:38:48.482011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:24:18.798 [2024-07-25 09:38:48.482914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.482929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.482966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.482986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.483002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.483039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.483075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.483112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.483148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.483189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.483226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.483264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.483285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.798 [2024-07-25 09:38:48.483301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.484910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.484934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.484976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.484995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.485017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.485033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.485054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.798 [2024-07-25 09:38:48.485070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.798 [2024-07-25 09:38:48.485092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.485419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.485459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.485976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.485997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.486012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.486048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:18.799 [2024-07-25 09:38:48.486085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.486200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.486236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.486508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.486525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.488200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.488315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.488377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.799 [2024-07-25 09:38:48.488531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.488568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.799 [2024-07-25 09:38:48.488590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.799 [2024-07-25 09:38:48.488606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:24:18.800 [2024-07-25 09:38:48.488832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.488956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.488977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.488992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.489029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.489065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.489254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.489291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.489327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.489432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.489533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.489549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.491703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.491761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.491804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.491843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.491880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.491917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.491953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.491974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.491990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.492026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.492062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.800 [2024-07-25 09:38:48.492099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.800 [2024-07-25 09:38:48.492135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.492171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.800 [2024-07-25 09:38:48.492207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.800 [2024-07-25 09:38:48.492228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.492244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.492285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.492321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.492383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.492423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.492460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.492497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.492534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.492572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.492609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.492631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.492647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.493582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.493888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.493961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.493982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.493998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.494034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.494070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:24:18.801 [2024-07-25 09:38:48.494168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.494293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.801 [2024-07-25 09:38:48.494908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.494945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.494966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.494981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.495002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.495018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.495039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.495059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.801 [2024-07-25 09:38:48.495081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.801 [2024-07-25 09:38:48.495097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.802 [2024-07-25 09:38:48.495783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.495819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.495840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.495856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.496472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.496516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.496555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.496593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 
nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.496924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.496961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.496982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.802 [2024-07-25 09:38:48.496997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.497018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.497033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.497054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.497070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.497091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.802 [2024-07-25 09:38:48.497110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.802 [2024-07-25 09:38:48.497132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.497147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.498574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.498634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.498686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.498725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.498762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.498798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.498834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.498871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.498907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:24:18.803 [2024-07-25 09:38:48.498927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.498943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.498963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.498983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.499533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.499592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.803 [2024-07-25 09:38:48.499608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.501957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.501981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.803 [2024-07-25 09:38:48.502039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.803 [2024-07-25 09:38:48.502058] nvme_qpair.c: 
00:24:18.803 [2024-07-25 09:38:48.502081 - 09:38:48.503202] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE* pairs repeat for every outstanding WRITE/READ on qid:1 (cid and lba vary, lba ~73472-74784, sqhd 0043-0060), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0. 00:24:18.803 [2024-07-25 09:38:48.503202] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.804 [2024-07-25 09:38:48.503217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.804 [2024-07-25 09:38:48.503238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.804 [2024-07-25 09:38:48.503256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.804 [2024-07-25 09:38:48.503279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.804 [2024-07-25 09:38:48.503294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.804 [2024-07-25 09:38:48.503316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.804 [2024-07-25 09:38:48.503331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.804 Received shutdown signal, test time was about 34.559880 seconds 00:24:18.804 00:24:18.804 Latency(us) 00:24:18.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.804 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:18.804 Verification LBA range: start 0x0 length 0x4000 00:24:18.804 Nvme0n1 : 34.56 8561.38 33.44 0.00 0.00 14925.87 1626.26 4076242.11 00:24:18.804 =================================================================================================================== 00:24:18.804 Total : 8561.38 33.44 0.00 0.00 14925.87 1626.26 4076242.11 00:24:18.804 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.060 rmmod nvme_tcp 00:24:19.060 rmmod nvme_fabrics 00:24:19.060 rmmod nvme_keyring 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.060 
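The trace above is the multipath_status teardown: the test subsystem is deleted over RPC, nvmftestfini unloads the kernel NVMe/TCP modules, and the target process and test addresses are cleaned up next. A minimal shell sketch of that sequence, assuming rpc.py is on PATH and $nvmfpid holds the target PID (both are assumptions here; the script itself goes through its nvmftestfini/killprocess helpers):
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem served during the test
  modprobe -v -r nvme-tcp                                   # unload initiator-side kernel modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                        # roughly what the killprocess helper does
  ip -4 addr flush cvl_0_1                                  # clear the initiator-side test address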
09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 594966 ']' 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 594966 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 594966 ']' 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 594966 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.060 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594966 00:24:19.317 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.317 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.317 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594966' 00:24:19.317 killing process with pid 594966 00:24:19.317 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 594966 00:24:19.317 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 594966 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.575 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.475 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.475 00:24:21.475 real 0m43.291s 00:24:21.475 user 2m11.222s 00:24:21.475 sys 0m12.066s 00:24:21.475 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.475 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:21.475 ************************************ 00:24:21.475 END TEST nvmf_host_multipath_status 00:24:21.475 ************************************ 00:24:21.475 09:38:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:21.476 09:38:54 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:21.476 09:38:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.476 09:38:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.476 ************************************ 00:24:21.476 START TEST nvmf_discovery_remove_ifc 00:24:21.476 ************************************ 00:24:21.476 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:21.733 * Looking for test storage... 00:24:21.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.733 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.734 09:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:23.633 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:23.633 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:23.633 Found net devices under 0000:82:00.0: cvl_0_0 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
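The device scan above resolves each supported E810 PCI function to its kernel net device by globbing sysfs. The equivalent manual lookup for the first port found in this run would be the following hypothetical one-liner (not part of the script, shown only to make the mapping explicit):
  ls /sys/bus/pci/devices/0000:82:00.0/net/    # prints cvl_0_0, the name the test uses for the target-side port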
00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:23.633 Found net devices under 0000:82:00.1: cvl_0_1 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.633 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.634 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.892 09:38:56 
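nvmf_tcp_init, traced above, turns the two E810 ports into a back-to-back topology: the target-side port is moved into a dedicated network namespace while the initiator-side port stays in the root namespace. Reduced to bare ip/iptables calls (interface and address names taken from this run), the layout is roughly:
  ip netns add cvl_0_0_ns_spdk                                         # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) traffic in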
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:24:23.892 00:24:23.892 --- 10.0.0.2 ping statistics --- 00:24:23.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.892 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:23.892 00:24:23.892 --- 10.0.0.1 ping statistics --- 00:24:23.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.892 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=601714 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 601714 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 601714 ']' 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:23.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.892 09:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.892 [2024-07-25 09:38:56.482932] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:24:23.892 [2024-07-25 09:38:56.483016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.892 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.892 [2024-07-25 09:38:56.545897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.150 [2024-07-25 09:38:56.660988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.150 [2024-07-25 09:38:56.661044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.150 [2024-07-25 09:38:56.661061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.150 [2024-07-25 09:38:56.661075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.150 [2024-07-25 09:38:56.661087] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.150 [2024-07-25 09:38:56.661123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.715 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.973 [2024-07-25 09:38:57.455274] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.973 [2024-07-25 09:38:57.463466] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:24.973 null0 00:24:24.973 [2024-07-25 09:38:57.495424] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=601823 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 601823 /tmp/host.sock 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 601823 ']' 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:24.973 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.973 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.973 [2024-07-25 09:38:57.562483] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:24:24.973 [2024-07-25 09:38:57.562557] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601823 ] 00:24:24.973 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.973 [2024-07-25 09:38:57.620896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.232 [2024-07-25 09:38:57.729482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.232 09:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.604 [2024-07-25 09:38:58.932522] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:26.604 [2024-07-25 09:38:58.932553] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:26.604 [2024-07-25 09:38:58.932576] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:26.604 [2024-07-25 09:38:59.020880] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:26.604 [2024-07-25 09:38:59.246222] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:26.604 [2024-07-25 09:38:59.246293] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:26.604 [2024-07-25 09:38:59.246343] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:26.604 [2024-07-25 09:38:59.246378] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:26.604 [2024-07-25 09:38:59.246431] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.604 [2024-07-25 09:38:59.251502] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d928e0 was disconnected and freed. delete nvme_qpair. 
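At this point the discovery service on 10.0.0.2:8009 has been attached and the namespace has surfaced as bdev nvme0n1; the wait_for_bdev/get_bdev_list helpers traced above simply poll bdev_get_bdevs once a second until the expected name appears. A simplified sketch of that loop, assuming rpc.py is called directly in place of the script's rpc_cmd wrapper and omitting the helper's timeout handling:
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --wait-for-attach    # as issued above; loss/reconnect timeouts omitted
  while true; do
      names=$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
      [[ "$names" == "nvme0n1" ]] && break                     # discovered namespace is the only bdev expected
      sleep 1
  done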
00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:26.604 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:26.861 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:26.861 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:26.862 09:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.794 09:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.724 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.724 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.724 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.724 09:39:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.724 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.725 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.725 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.725 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.982 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:28.982 09:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.915 09:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:30.849 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.267 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.267 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.267 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.267 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.267 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.267 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.268 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.268 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.268 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.268 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.268 [2024-07-25 09:39:04.687010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:32.268 [2024-07-25 09:39:04.687077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.268 [2024-07-25 09:39:04.687100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.268 [2024-07-25 09:39:04.687119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.268 [2024-07-25 09:39:04.687136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.268 [2024-07-25 09:39:04.687151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.268 [2024-07-25 09:39:04.687167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.268 [2024-07-25 09:39:04.687183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.268 [2024-07-25 09:39:04.687198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.268 [2024-07-25 09:39:04.687215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.268 [2024-07-25 09:39:04.687230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.268 [2024-07-25 09:39:04.687245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59320 is same with the state(5) to be set 00:24:32.268 [2024-07-25 09:39:04.697027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d59320 (9): Bad file descriptor 00:24:32.268 [2024-07-25 09:39:04.707075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
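The connection-timed-out and ABORTED - SQ DELETION messages above follow directly from the address removal and link-down issued a few steps earlier: with 10.0.0.2 unreachable, the host's TCP connection to the subsystem dies and bdev_nvme begins the reconnect cycle bounded by the --reconnect-delay-sec and --ctrlr-loss-timeout-sec values passed to bdev_nvme_start_discovery. The fault-injection step, condensed from the trace (interface and address names from this run):
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # target address disappears
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # and the target-side link drops
  # bdev_nvme now retries roughly once per --reconnect-delay-sec; once --ctrlr-loss-timeout-sec (2s here)
  # expires, the controller is torn down, nvme0n1 is deleted and the discovery entry is removed.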
00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.201 [2024-07-25 09:39:05.754435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:33.201 [2024-07-25 09:39:05.754511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d59320 with addr=10.0.0.2, port=4420 00:24:33.201 [2024-07-25 09:39:05.754538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59320 is same with the state(5) to be set 00:24:33.201 [2024-07-25 09:39:05.754590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d59320 (9): Bad file descriptor 00:24:33.201 [2024-07-25 09:39:05.755129] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:33.201 [2024-07-25 09:39:05.755179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:33.201 [2024-07-25 09:39:05.755199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:33.201 [2024-07-25 09:39:05.755217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:33.201 [2024-07-25 09:39:05.755262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.201 [2024-07-25 09:39:05.755282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.201 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.133 [2024-07-25 09:39:06.757804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:34.133 [2024-07-25 09:39:06.757847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:34.133 [2024-07-25 09:39:06.757864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:34.133 [2024-07-25 09:39:06.757880] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:34.133 [2024-07-25 09:39:06.757909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:34.133 [2024-07-25 09:39:06.757954] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:34.133 [2024-07-25 09:39:06.758002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.133 [2024-07-25 09:39:06.758026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.133 [2024-07-25 09:39:06.758047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.133 [2024-07-25 09:39:06.758062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.133 [2024-07-25 09:39:06.758079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.133 [2024-07-25 09:39:06.758094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.133 [2024-07-25 09:39:06.758110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.133 [2024-07-25 09:39:06.758134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.133 [2024-07-25 09:39:06.758151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.133 [2024-07-25 09:39:06.758166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.133 [2024-07-25 09:39:06.758181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
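With the controller stuck in the failed state, the next traced step restores the address on the target-side interface inside its network namespace and then waits for the discovery service to attach a fresh controller and recreate the bdev. Pulled together from the trace that follows, the restore-and-wait sequence looks like this (namespace, interface, address and bdev names are the ones used by this run):

  # put the target address back on the interface inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

  # discovery should now re-create the namespace bdev; reuse the polling helper sketched above
  wait_for_bdev nvme1n1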
00:24:34.133 [2024-07-25 09:39:06.758271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58780 (9): Bad file descriptor 00:24:34.133 [2024-07-25 09:39:06.759270] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:34.133 [2024-07-25 09:39:06.759294] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.133 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.134 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.391 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:34.391 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.324 09:39:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:35.324 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.255 [2024-07-25 09:39:08.810028] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:36.256 [2024-07-25 09:39:08.810059] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:36.256 [2024-07-25 09:39:08.810085] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.256 [2024-07-25 09:39:08.937515] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.256 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.514 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:36.514 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.514 [2024-07-25 09:39:09.000296] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:36.514 [2024-07-25 09:39:09.000377] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:36.514 [2024-07-25 09:39:09.000444] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:36.514 [2024-07-25 09:39:09.000469] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:36.514 [2024-07-25 09:39:09.000483] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:36.514 [2024-07-25 09:39:09.048039] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d7c0d0 was disconnected and freed. 
delete nvme_qpair. 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.446 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 601823 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 601823 ']' 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 601823 00:24:37.446 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601823 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601823' 00:24:37.447 killing process with pid 601823 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 601823 00:24:37.447 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 601823 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:37.704 rmmod nvme_tcp 00:24:37.704 rmmod nvme_fabrics 00:24:37.704 rmmod nvme_keyring 
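The teardown traced here follows the framework's usual pattern: killprocess verifies the PID still belongs to an SPDK reactor before killing it and waiting for it to exit, and nvmftestfini then syncs and unloads the nvme-tcp transport modules. A minimal sketch of the killprocess steps as they appear in the trace; pid 601823 and the reactor_0 name are from this run, and the real helper in autotest_common.sh also special-cases a sudo wrapper process and other corner cases omitted here.

  killprocess() {
      local pid=$1 process_name=
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0              # already gone, nothing to do
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 for an SPDK app
      fi
      # the traced helper branches when process_name is "sudo"; that branch is omitted here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }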
00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 601714 ']' 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 601714 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 601714 ']' 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 601714 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601714 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601714' 00:24:37.704 killing process with pid 601714 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 601714 00:24:37.704 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 601714 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.269 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.169 00:24:40.169 real 0m18.547s 00:24:40.169 user 0m26.839s 00:24:40.169 sys 0m3.025s 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.169 ************************************ 00:24:40.169 END TEST nvmf_discovery_remove_ifc 00:24:40.169 ************************************ 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.169 ************************************ 00:24:40.169 START TEST nvmf_identify_kernel_target 00:24:40.169 ************************************ 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:40.169 * Looking for test storage... 00:24:40.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.169 09:39:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.169 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.170 09:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:42.700 
09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:42.700 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:42.700 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:42.700 Found net devices under 0000:82:00.0: cvl_0_0 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:42.700 Found net devices under 0000:82:00.1: cvl_0_1 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.700 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:42.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:24:42.700 00:24:42.700 --- 10.0.0.2 ping statistics --- 00:24:42.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.700 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:24:42.701 00:24:42.701 --- 10.0.0.1 ping statistics --- 00:24:42.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.701 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:42.701 09:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:43.634 Waiting for block devices as requested 00:24:43.634 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:24:43.634 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:43.634 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:43.892 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:43.892 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:43.892 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:43.892 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:44.149 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:44.149 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:44.149 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:44.149 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:44.407 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:44.407 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:44.407 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:44.407 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:44.665 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:44.665 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
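configure_kernel_target, entered above and completed in the trace that follows, builds an in-kernel NVMe-oF target purely through configfs: create a subsystem and a namespace under /sys/kernel/config/nvmet, back the namespace with the first usable local NVMe block device, open a TCP listener on the target address, and link the subsystem into that port. A condensed sketch of that sequence using the NQN, device and address from this run; xtrace does not show redirection targets, so the attribute file names below are the standard nvmet configfs attributes rather than values copied from the log.

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"                  # no per-host allow list
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # backing block device found above
  echo 1 > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"            # kernel target listens here
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"

  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover call later in the trace then queries that port with the generated host NQN and, as the log shows, returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.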
00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:44.923 No valid GPT data, bailing 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:44.923 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:24:44.923 00:24:44.923 Discovery Log Number of Records 2, Generation counter 2 00:24:44.923 =====Discovery Log Entry 0====== 00:24:44.923 trtype: tcp 00:24:44.923 adrfam: ipv4 00:24:44.923 subtype: current discovery subsystem 00:24:44.923 treq: not specified, sq flow control disable supported 00:24:44.923 portid: 1 00:24:44.923 trsvcid: 4420 00:24:44.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:44.923 traddr: 10.0.0.1 00:24:44.923 eflags: none 00:24:44.923 sectype: none 00:24:44.923 =====Discovery Log Entry 1====== 00:24:44.923 trtype: tcp 00:24:44.923 adrfam: ipv4 00:24:44.923 subtype: nvme subsystem 00:24:44.923 treq: not specified, sq flow control disable supported 00:24:44.923 portid: 1 00:24:44.923 trsvcid: 4420 00:24:44.923 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:44.923 traddr: 10.0.0.1 00:24:44.923 eflags: none 00:24:44.923 sectype: none 00:24:44.923 09:39:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:44.923 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:44.923 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.923 ===================================================== 00:24:44.923 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:44.923 ===================================================== 00:24:44.923 Controller Capabilities/Features 00:24:44.923 ================================ 00:24:44.923 Vendor ID: 0000 00:24:44.923 Subsystem Vendor ID: 0000 00:24:44.923 Serial Number: ba99f7c0c569369f14f7 00:24:44.923 Model Number: Linux 00:24:44.923 Firmware Version: 6.7.0-68 00:24:44.924 Recommended Arb Burst: 0 00:24:44.924 IEEE OUI Identifier: 00 00 00 00:24:44.924 Multi-path I/O 00:24:44.924 May have multiple subsystem ports: No 00:24:44.924 May have multiple controllers: No 00:24:44.924 Associated with SR-IOV VF: No 00:24:44.924 Max Data Transfer Size: Unlimited 00:24:44.924 Max Number of Namespaces: 0 00:24:44.924 Max Number of I/O Queues: 1024 00:24:44.924 NVMe Specification Version (VS): 1.3 00:24:44.924 NVMe Specification Version (Identify): 1.3 00:24:44.924 Maximum Queue Entries: 1024 00:24:44.924 Contiguous Queues Required: No 00:24:44.924 Arbitration Mechanisms Supported 00:24:44.924 Weighted Round Robin: Not Supported 00:24:44.924 Vendor Specific: Not Supported 00:24:44.924 Reset Timeout: 7500 ms 00:24:44.924 Doorbell Stride: 4 bytes 00:24:44.924 NVM Subsystem Reset: Not Supported 00:24:44.924 Command Sets Supported 00:24:44.924 NVM Command Set: Supported 00:24:44.924 Boot Partition: Not Supported 00:24:44.924 Memory Page Size Minimum: 4096 bytes 00:24:44.924 Memory Page Size Maximum: 4096 bytes 00:24:44.924 Persistent Memory Region: Not Supported 00:24:44.924 Optional Asynchronous Events Supported 00:24:44.924 Namespace Attribute Notices: Not Supported 00:24:44.924 Firmware Activation Notices: Not Supported 00:24:44.924 ANA Change Notices: Not Supported 00:24:44.924 PLE Aggregate Log Change Notices: Not Supported 00:24:44.924 LBA Status Info Alert Notices: Not Supported 00:24:44.924 EGE Aggregate Log Change Notices: Not Supported 00:24:44.924 Normal NVM Subsystem Shutdown event: Not Supported 00:24:44.924 Zone Descriptor Change Notices: Not Supported 00:24:44.924 Discovery Log Change Notices: Supported 00:24:44.924 Controller Attributes 00:24:44.924 128-bit Host Identifier: Not Supported 00:24:44.924 Non-Operational Permissive Mode: Not Supported 00:24:44.924 NVM Sets: Not Supported 00:24:44.924 Read Recovery Levels: Not Supported 00:24:44.924 Endurance Groups: Not Supported 00:24:44.924 Predictable Latency Mode: Not Supported 00:24:44.924 Traffic Based Keep ALive: Not Supported 00:24:44.924 Namespace Granularity: Not Supported 00:24:44.924 SQ Associations: Not Supported 00:24:44.924 UUID List: Not Supported 00:24:44.924 Multi-Domain Subsystem: Not Supported 00:24:44.924 Fixed Capacity Management: Not Supported 00:24:44.924 Variable Capacity Management: Not Supported 00:24:44.924 Delete Endurance Group: Not Supported 00:24:44.924 Delete NVM Set: Not Supported 00:24:44.924 Extended LBA Formats Supported: Not Supported 00:24:44.924 Flexible Data Placement Supported: Not Supported 00:24:44.924 00:24:44.924 Controller Memory Buffer Support 00:24:44.924 ================================ 00:24:44.924 Supported: No 
00:24:44.924 00:24:44.924 Persistent Memory Region Support 00:24:44.924 ================================ 00:24:44.924 Supported: No 00:24:44.924 00:24:44.924 Admin Command Set Attributes 00:24:44.924 ============================ 00:24:44.924 Security Send/Receive: Not Supported 00:24:44.924 Format NVM: Not Supported 00:24:44.924 Firmware Activate/Download: Not Supported 00:24:44.924 Namespace Management: Not Supported 00:24:44.924 Device Self-Test: Not Supported 00:24:44.924 Directives: Not Supported 00:24:44.924 NVMe-MI: Not Supported 00:24:44.924 Virtualization Management: Not Supported 00:24:44.924 Doorbell Buffer Config: Not Supported 00:24:44.924 Get LBA Status Capability: Not Supported 00:24:44.924 Command & Feature Lockdown Capability: Not Supported 00:24:44.924 Abort Command Limit: 1 00:24:44.924 Async Event Request Limit: 1 00:24:44.924 Number of Firmware Slots: N/A 00:24:44.924 Firmware Slot 1 Read-Only: N/A 00:24:44.924 Firmware Activation Without Reset: N/A 00:24:44.924 Multiple Update Detection Support: N/A 00:24:44.924 Firmware Update Granularity: No Information Provided 00:24:44.924 Per-Namespace SMART Log: No 00:24:44.924 Asymmetric Namespace Access Log Page: Not Supported 00:24:44.924 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:44.924 Command Effects Log Page: Not Supported 00:24:44.924 Get Log Page Extended Data: Supported 00:24:44.924 Telemetry Log Pages: Not Supported 00:24:44.924 Persistent Event Log Pages: Not Supported 00:24:44.924 Supported Log Pages Log Page: May Support 00:24:44.924 Commands Supported & Effects Log Page: Not Supported 00:24:44.924 Feature Identifiers & Effects Log Page:May Support 00:24:44.924 NVMe-MI Commands & Effects Log Page: May Support 00:24:44.924 Data Area 4 for Telemetry Log: Not Supported 00:24:44.924 Error Log Page Entries Supported: 1 00:24:44.924 Keep Alive: Not Supported 00:24:44.924 00:24:44.924 NVM Command Set Attributes 00:24:44.924 ========================== 00:24:44.924 Submission Queue Entry Size 00:24:44.924 Max: 1 00:24:44.924 Min: 1 00:24:44.924 Completion Queue Entry Size 00:24:44.924 Max: 1 00:24:44.924 Min: 1 00:24:44.924 Number of Namespaces: 0 00:24:44.924 Compare Command: Not Supported 00:24:44.924 Write Uncorrectable Command: Not Supported 00:24:44.924 Dataset Management Command: Not Supported 00:24:44.924 Write Zeroes Command: Not Supported 00:24:44.924 Set Features Save Field: Not Supported 00:24:44.924 Reservations: Not Supported 00:24:44.924 Timestamp: Not Supported 00:24:44.924 Copy: Not Supported 00:24:44.924 Volatile Write Cache: Not Present 00:24:44.924 Atomic Write Unit (Normal): 1 00:24:44.924 Atomic Write Unit (PFail): 1 00:24:44.924 Atomic Compare & Write Unit: 1 00:24:44.924 Fused Compare & Write: Not Supported 00:24:44.924 Scatter-Gather List 00:24:44.924 SGL Command Set: Supported 00:24:44.924 SGL Keyed: Not Supported 00:24:44.924 SGL Bit Bucket Descriptor: Not Supported 00:24:44.924 SGL Metadata Pointer: Not Supported 00:24:44.924 Oversized SGL: Not Supported 00:24:44.924 SGL Metadata Address: Not Supported 00:24:44.924 SGL Offset: Supported 00:24:44.924 Transport SGL Data Block: Not Supported 00:24:44.924 Replay Protected Memory Block: Not Supported 00:24:44.924 00:24:44.924 Firmware Slot Information 00:24:44.924 ========================= 00:24:44.924 Active slot: 0 00:24:44.924 00:24:44.924 00:24:44.924 Error Log 00:24:44.924 ========= 00:24:44.924 00:24:44.924 Active Namespaces 00:24:44.924 ================= 00:24:44.924 Discovery Log Page 00:24:44.924 ================== 00:24:44.924 
Generation Counter: 2 00:24:44.924 Number of Records: 2 00:24:44.924 Record Format: 0 00:24:44.924 00:24:44.924 Discovery Log Entry 0 00:24:44.924 ---------------------- 00:24:44.924 Transport Type: 3 (TCP) 00:24:44.924 Address Family: 1 (IPv4) 00:24:44.924 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:44.924 Entry Flags: 00:24:44.924 Duplicate Returned Information: 0 00:24:44.924 Explicit Persistent Connection Support for Discovery: 0 00:24:44.924 Transport Requirements: 00:24:44.924 Secure Channel: Not Specified 00:24:44.924 Port ID: 1 (0x0001) 00:24:44.924 Controller ID: 65535 (0xffff) 00:24:44.924 Admin Max SQ Size: 32 00:24:44.924 Transport Service Identifier: 4420 00:24:44.924 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:44.924 Transport Address: 10.0.0.1 00:24:44.924 Discovery Log Entry 1 00:24:44.924 ---------------------- 00:24:44.924 Transport Type: 3 (TCP) 00:24:44.924 Address Family: 1 (IPv4) 00:24:44.924 Subsystem Type: 2 (NVM Subsystem) 00:24:44.924 Entry Flags: 00:24:44.924 Duplicate Returned Information: 0 00:24:44.924 Explicit Persistent Connection Support for Discovery: 0 00:24:44.924 Transport Requirements: 00:24:44.924 Secure Channel: Not Specified 00:24:44.924 Port ID: 1 (0x0001) 00:24:44.924 Controller ID: 65535 (0xffff) 00:24:44.924 Admin Max SQ Size: 32 00:24:44.924 Transport Service Identifier: 4420 00:24:44.924 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:44.924 Transport Address: 10.0.0.1 00:24:44.924 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.183 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.183 get_feature(0x01) failed 00:24:45.183 get_feature(0x02) failed 00:24:45.183 get_feature(0x04) failed 00:24:45.183 ===================================================== 00:24:45.183 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:45.183 ===================================================== 00:24:45.183 Controller Capabilities/Features 00:24:45.183 ================================ 00:24:45.183 Vendor ID: 0000 00:24:45.183 Subsystem Vendor ID: 0000 00:24:45.183 Serial Number: dc8ffd9b91561bafb787 00:24:45.183 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:45.183 Firmware Version: 6.7.0-68 00:24:45.183 Recommended Arb Burst: 6 00:24:45.183 IEEE OUI Identifier: 00 00 00 00:24:45.183 Multi-path I/O 00:24:45.183 May have multiple subsystem ports: Yes 00:24:45.183 May have multiple controllers: Yes 00:24:45.183 Associated with SR-IOV VF: No 00:24:45.183 Max Data Transfer Size: Unlimited 00:24:45.183 Max Number of Namespaces: 1024 00:24:45.183 Max Number of I/O Queues: 128 00:24:45.183 NVMe Specification Version (VS): 1.3 00:24:45.183 NVMe Specification Version (Identify): 1.3 00:24:45.183 Maximum Queue Entries: 1024 00:24:45.183 Contiguous Queues Required: No 00:24:45.183 Arbitration Mechanisms Supported 00:24:45.183 Weighted Round Robin: Not Supported 00:24:45.183 Vendor Specific: Not Supported 00:24:45.183 Reset Timeout: 7500 ms 00:24:45.183 Doorbell Stride: 4 bytes 00:24:45.183 NVM Subsystem Reset: Not Supported 00:24:45.183 Command Sets Supported 00:24:45.183 NVM Command Set: Supported 00:24:45.183 Boot Partition: Not Supported 00:24:45.183 Memory Page Size Minimum: 4096 bytes 00:24:45.183 Memory Page Size Maximum: 4096 bytes 00:24:45.183 
Persistent Memory Region: Not Supported 00:24:45.183 Optional Asynchronous Events Supported 00:24:45.183 Namespace Attribute Notices: Supported 00:24:45.183 Firmware Activation Notices: Not Supported 00:24:45.183 ANA Change Notices: Supported 00:24:45.183 PLE Aggregate Log Change Notices: Not Supported 00:24:45.183 LBA Status Info Alert Notices: Not Supported 00:24:45.183 EGE Aggregate Log Change Notices: Not Supported 00:24:45.183 Normal NVM Subsystem Shutdown event: Not Supported 00:24:45.183 Zone Descriptor Change Notices: Not Supported 00:24:45.183 Discovery Log Change Notices: Not Supported 00:24:45.183 Controller Attributes 00:24:45.183 128-bit Host Identifier: Supported 00:24:45.183 Non-Operational Permissive Mode: Not Supported 00:24:45.183 NVM Sets: Not Supported 00:24:45.183 Read Recovery Levels: Not Supported 00:24:45.183 Endurance Groups: Not Supported 00:24:45.183 Predictable Latency Mode: Not Supported 00:24:45.183 Traffic Based Keep ALive: Supported 00:24:45.183 Namespace Granularity: Not Supported 00:24:45.183 SQ Associations: Not Supported 00:24:45.183 UUID List: Not Supported 00:24:45.183 Multi-Domain Subsystem: Not Supported 00:24:45.183 Fixed Capacity Management: Not Supported 00:24:45.183 Variable Capacity Management: Not Supported 00:24:45.183 Delete Endurance Group: Not Supported 00:24:45.183 Delete NVM Set: Not Supported 00:24:45.183 Extended LBA Formats Supported: Not Supported 00:24:45.183 Flexible Data Placement Supported: Not Supported 00:24:45.183 00:24:45.183 Controller Memory Buffer Support 00:24:45.183 ================================ 00:24:45.183 Supported: No 00:24:45.183 00:24:45.183 Persistent Memory Region Support 00:24:45.183 ================================ 00:24:45.183 Supported: No 00:24:45.183 00:24:45.183 Admin Command Set Attributes 00:24:45.183 ============================ 00:24:45.183 Security Send/Receive: Not Supported 00:24:45.183 Format NVM: Not Supported 00:24:45.183 Firmware Activate/Download: Not Supported 00:24:45.183 Namespace Management: Not Supported 00:24:45.183 Device Self-Test: Not Supported 00:24:45.183 Directives: Not Supported 00:24:45.183 NVMe-MI: Not Supported 00:24:45.183 Virtualization Management: Not Supported 00:24:45.183 Doorbell Buffer Config: Not Supported 00:24:45.183 Get LBA Status Capability: Not Supported 00:24:45.183 Command & Feature Lockdown Capability: Not Supported 00:24:45.183 Abort Command Limit: 4 00:24:45.183 Async Event Request Limit: 4 00:24:45.183 Number of Firmware Slots: N/A 00:24:45.183 Firmware Slot 1 Read-Only: N/A 00:24:45.183 Firmware Activation Without Reset: N/A 00:24:45.183 Multiple Update Detection Support: N/A 00:24:45.183 Firmware Update Granularity: No Information Provided 00:24:45.183 Per-Namespace SMART Log: Yes 00:24:45.183 Asymmetric Namespace Access Log Page: Supported 00:24:45.183 ANA Transition Time : 10 sec 00:24:45.183 00:24:45.183 Asymmetric Namespace Access Capabilities 00:24:45.183 ANA Optimized State : Supported 00:24:45.183 ANA Non-Optimized State : Supported 00:24:45.183 ANA Inaccessible State : Supported 00:24:45.183 ANA Persistent Loss State : Supported 00:24:45.183 ANA Change State : Supported 00:24:45.183 ANAGRPID is not changed : No 00:24:45.183 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:45.183 00:24:45.183 ANA Group Identifier Maximum : 128 00:24:45.183 Number of ANA Group Identifiers : 128 00:24:45.183 Max Number of Allowed Namespaces : 1024 00:24:45.183 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:45.183 Command Effects Log Page: Supported 
00:24:45.183 Get Log Page Extended Data: Supported 00:24:45.183 Telemetry Log Pages: Not Supported 00:24:45.183 Persistent Event Log Pages: Not Supported 00:24:45.183 Supported Log Pages Log Page: May Support 00:24:45.183 Commands Supported & Effects Log Page: Not Supported 00:24:45.183 Feature Identifiers & Effects Log Page:May Support 00:24:45.183 NVMe-MI Commands & Effects Log Page: May Support 00:24:45.183 Data Area 4 for Telemetry Log: Not Supported 00:24:45.183 Error Log Page Entries Supported: 128 00:24:45.183 Keep Alive: Supported 00:24:45.183 Keep Alive Granularity: 1000 ms 00:24:45.184 00:24:45.184 NVM Command Set Attributes 00:24:45.184 ========================== 00:24:45.184 Submission Queue Entry Size 00:24:45.184 Max: 64 00:24:45.184 Min: 64 00:24:45.184 Completion Queue Entry Size 00:24:45.184 Max: 16 00:24:45.184 Min: 16 00:24:45.184 Number of Namespaces: 1024 00:24:45.184 Compare Command: Not Supported 00:24:45.184 Write Uncorrectable Command: Not Supported 00:24:45.184 Dataset Management Command: Supported 00:24:45.184 Write Zeroes Command: Supported 00:24:45.184 Set Features Save Field: Not Supported 00:24:45.184 Reservations: Not Supported 00:24:45.184 Timestamp: Not Supported 00:24:45.184 Copy: Not Supported 00:24:45.184 Volatile Write Cache: Present 00:24:45.184 Atomic Write Unit (Normal): 1 00:24:45.184 Atomic Write Unit (PFail): 1 00:24:45.184 Atomic Compare & Write Unit: 1 00:24:45.184 Fused Compare & Write: Not Supported 00:24:45.184 Scatter-Gather List 00:24:45.184 SGL Command Set: Supported 00:24:45.184 SGL Keyed: Not Supported 00:24:45.184 SGL Bit Bucket Descriptor: Not Supported 00:24:45.184 SGL Metadata Pointer: Not Supported 00:24:45.184 Oversized SGL: Not Supported 00:24:45.184 SGL Metadata Address: Not Supported 00:24:45.184 SGL Offset: Supported 00:24:45.184 Transport SGL Data Block: Not Supported 00:24:45.184 Replay Protected Memory Block: Not Supported 00:24:45.184 00:24:45.184 Firmware Slot Information 00:24:45.184 ========================= 00:24:45.184 Active slot: 0 00:24:45.184 00:24:45.184 Asymmetric Namespace Access 00:24:45.184 =========================== 00:24:45.184 Change Count : 0 00:24:45.184 Number of ANA Group Descriptors : 1 00:24:45.184 ANA Group Descriptor : 0 00:24:45.184 ANA Group ID : 1 00:24:45.184 Number of NSID Values : 1 00:24:45.184 Change Count : 0 00:24:45.184 ANA State : 1 00:24:45.184 Namespace Identifier : 1 00:24:45.184 00:24:45.184 Commands Supported and Effects 00:24:45.184 ============================== 00:24:45.184 Admin Commands 00:24:45.184 -------------- 00:24:45.184 Get Log Page (02h): Supported 00:24:45.184 Identify (06h): Supported 00:24:45.184 Abort (08h): Supported 00:24:45.184 Set Features (09h): Supported 00:24:45.184 Get Features (0Ah): Supported 00:24:45.184 Asynchronous Event Request (0Ch): Supported 00:24:45.184 Keep Alive (18h): Supported 00:24:45.184 I/O Commands 00:24:45.184 ------------ 00:24:45.184 Flush (00h): Supported 00:24:45.184 Write (01h): Supported LBA-Change 00:24:45.184 Read (02h): Supported 00:24:45.184 Write Zeroes (08h): Supported LBA-Change 00:24:45.184 Dataset Management (09h): Supported 00:24:45.184 00:24:45.184 Error Log 00:24:45.184 ========= 00:24:45.184 Entry: 0 00:24:45.184 Error Count: 0x3 00:24:45.184 Submission Queue Id: 0x0 00:24:45.184 Command Id: 0x5 00:24:45.184 Phase Bit: 0 00:24:45.184 Status Code: 0x2 00:24:45.184 Status Code Type: 0x0 00:24:45.184 Do Not Retry: 1 00:24:45.184 Error Location: 0x28 00:24:45.184 LBA: 0x0 00:24:45.184 Namespace: 0x0 00:24:45.184 Vendor Log 
Page: 0x0 00:24:45.184 ----------- 00:24:45.184 Entry: 1 00:24:45.184 Error Count: 0x2 00:24:45.184 Submission Queue Id: 0x0 00:24:45.184 Command Id: 0x5 00:24:45.184 Phase Bit: 0 00:24:45.184 Status Code: 0x2 00:24:45.184 Status Code Type: 0x0 00:24:45.184 Do Not Retry: 1 00:24:45.184 Error Location: 0x28 00:24:45.184 LBA: 0x0 00:24:45.184 Namespace: 0x0 00:24:45.184 Vendor Log Page: 0x0 00:24:45.184 ----------- 00:24:45.184 Entry: 2 00:24:45.184 Error Count: 0x1 00:24:45.184 Submission Queue Id: 0x0 00:24:45.184 Command Id: 0x4 00:24:45.184 Phase Bit: 0 00:24:45.184 Status Code: 0x2 00:24:45.184 Status Code Type: 0x0 00:24:45.184 Do Not Retry: 1 00:24:45.184 Error Location: 0x28 00:24:45.184 LBA: 0x0 00:24:45.184 Namespace: 0x0 00:24:45.184 Vendor Log Page: 0x0 00:24:45.184 00:24:45.184 Number of Queues 00:24:45.184 ================ 00:24:45.184 Number of I/O Submission Queues: 128 00:24:45.184 Number of I/O Completion Queues: 128 00:24:45.184 00:24:45.184 ZNS Specific Controller Data 00:24:45.184 ============================ 00:24:45.184 Zone Append Size Limit: 0 00:24:45.184 00:24:45.184 00:24:45.184 Active Namespaces 00:24:45.184 ================= 00:24:45.184 get_feature(0x05) failed 00:24:45.184 Namespace ID:1 00:24:45.184 Command Set Identifier: NVM (00h) 00:24:45.184 Deallocate: Supported 00:24:45.184 Deallocated/Unwritten Error: Not Supported 00:24:45.184 Deallocated Read Value: Unknown 00:24:45.184 Deallocate in Write Zeroes: Not Supported 00:24:45.184 Deallocated Guard Field: 0xFFFF 00:24:45.184 Flush: Supported 00:24:45.184 Reservation: Not Supported 00:24:45.184 Namespace Sharing Capabilities: Multiple Controllers 00:24:45.184 Size (in LBAs): 3907029168 (1863GiB) 00:24:45.184 Capacity (in LBAs): 3907029168 (1863GiB) 00:24:45.184 Utilization (in LBAs): 3907029168 (1863GiB) 00:24:45.184 UUID: 0205eed2-4b7e-499c-830c-b1e6a17a23d2 00:24:45.184 Thin Provisioning: Not Supported 00:24:45.184 Per-NS Atomic Units: Yes 00:24:45.184 Atomic Boundary Size (Normal): 0 00:24:45.184 Atomic Boundary Size (PFail): 0 00:24:45.184 Atomic Boundary Offset: 0 00:24:45.184 NGUID/EUI64 Never Reused: No 00:24:45.184 ANA group ID: 1 00:24:45.184 Namespace Write Protected: No 00:24:45.184 Number of LBA Formats: 1 00:24:45.184 Current LBA Format: LBA Format #00 00:24:45.184 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:45.184 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.184 rmmod nvme_tcp 00:24:45.184 rmmod nvme_fabrics 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:45.184 
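
The identify pass above comes from spdk_nvme_identify run against the kernel-configured target, first for the discovery subsystem and then for nqn.2016-06.io.spdk:testnqn. The "get_feature(...) failed" lines are expected: the Linux kernel nvmet target does not implement those optional Get Features requests, which is also why the Error Log above holds a few entries with Status Code 0x2 (generic status type, i.e. Invalid Field in Command). To repeat the same queries by hand, assuming the target is still exported on 10.0.0.1:4420, something like the sketch below works; the nvme-cli line is a generic equivalent for the discovery half and is not taken from this run:

  # SPDK identify tool from the build tree, same transport ID string the test used
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  # Rough nvme-cli equivalent for the discovery subsystem, if nvme-cli is installed
  nvme discover -t tcp -a 10.0.0.1 -s 4420
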
09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.184 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:47.716 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:48.648 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:48.648 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:48.648 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:48.649 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:48.649 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:48.649 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:48.649 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:48.649 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:48.649 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:24:48.649 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:50.553 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:24:50.553 00:24:50.553 real 0m10.316s 00:24:50.553 user 0m1.956s 00:24:50.553 sys 0m3.467s 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.553 ************************************ 00:24:50.553 END TEST nvmf_identify_kernel_target 00:24:50.553 ************************************ 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.553 ************************************ 00:24:50.553 START TEST nvmf_auth_host 00:24:50.553 ************************************ 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:50.553 * Looking for test storage... 00:24:50.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.553 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.554 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.455 
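
host/auth.sh at this point has fixed the sweep parameters for DH-HMAC-CHAP: the digests array (sha256, sha384, sha512), the dhgroups array (ffdhe2048 through ffdhe8192), the subsystem NQN nqn.2024-02.io.spdk:cnode0, the host NQN nqn.2024-02.io.spdk:host0, and the nvmet configfs paths it will manage. For orientation only, the sketch below shows roughly how such secrets are supplied from a plain nvme-cli initiator; it is not how auth.sh itself drives the test, the address and port are just the common.sh defaults, the key values are placeholders rather than keys from this run, and --dhchap-secret/--dhchap-ctrl-secret require a reasonably recent nvme-cli (2.x):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2024-02.io.spdk:cnode0 \
      --hostnqn nqn.2024-02.io.spdk:host0 \
      --dhchap-secret 'DHHC-1:01:<host key>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller key>:'
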
09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:52.455 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:52.455 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:52.455 Found net devices under 0000:82:00.0: cvl_0_0 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:52.455 Found net devices under 0000:82:00.1: cvl_0_1 00:24:52.455 09:39:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.455 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:52.713 00:24:52.713 --- 10.0.0.2 ping statistics --- 00:24:52.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.713 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:52.713 00:24:52.713 --- 10.0.0.1 ping statistics --- 00:24:52.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.713 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=609579 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 609579 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 609579 ']' 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
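
nvmf_tcp_init splits the two E810 ports across network namespaces so the SPDK target and the initiator each get their own interface: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (which is why nvmf_tgt is later started under ip netns exec), while cvl_0_1 stays in the root namespace with 10.0.0.1/24, and the two pings above confirm reachability in both directions. Condensed from the traced commands, the setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
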
00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.713 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.647 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.647 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:53.647 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.647 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.647 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cfcbeaf79312ad53ff937bc676431375 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GDM 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cfcbeaf79312ad53ff937bc676431375 0 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cfcbeaf79312ad53ff937bc676431375 0 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cfcbeaf79312ad53ff937bc676431375 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GDM 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GDM 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GDM 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:53.905 09:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:53.905 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f326ac121cbb733dc3cd3aedb229bc2b29fdae53f92af4251c665d0c7e6c5d8 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HHo 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f326ac121cbb733dc3cd3aedb229bc2b29fdae53f92af4251c665d0c7e6c5d8 3 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f326ac121cbb733dc3cd3aedb229bc2b29fdae53f92af4251c665d0c7e6c5d8 3 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f326ac121cbb733dc3cd3aedb229bc2b29fdae53f92af4251c665d0c7e6c5d8 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HHo 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HHo 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HHo 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=671f6d7e91c734392b5d5f4ef84113e96626151560695042 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AgE 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 671f6d7e91c734392b5d5f4ef84113e96626151560695042 0 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 671f6d7e91c734392b5d5f4ef84113e96626151560695042 0 
00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=671f6d7e91c734392b5d5f4ef84113e96626151560695042 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AgE 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AgE 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AgE 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=38b46aa45730a7efb102219dabc8042ba83d8ecba8f1fd40 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.P4w 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 38b46aa45730a7efb102219dabc8042ba83d8ecba8f1fd40 2 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 38b46aa45730a7efb102219dabc8042ba83d8ecba8f1fd40 2 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=38b46aa45730a7efb102219dabc8042ba83d8ecba8f1fd40 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.P4w 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.P4w 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.P4w 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.906 09:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a34ea4e632bb2b3c75f85b5d6da4252f 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NOf 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a34ea4e632bb2b3c75f85b5d6da4252f 1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a34ea4e632bb2b3c75f85b5d6da4252f 1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a34ea4e632bb2b3c75f85b5d6da4252f 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NOf 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NOf 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.NOf 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f9deac6c3fc2dc1900f1cbe5ad362d43 00:24:53.906 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gtr 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f9deac6c3fc2dc1900f1cbe5ad362d43 1 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f9deac6c3fc2dc1900f1cbe5ad362d43 1 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=f9deac6c3fc2dc1900f1cbe5ad362d43 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gtr 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gtr 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gtr 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9f88ce2cb031fbb7a5d1cb70e83af2263b98ad2a93babff2 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X1y 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9f88ce2cb031fbb7a5d1cb70e83af2263b98ad2a93babff2 2 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9f88ce2cb031fbb7a5d1cb70e83af2263b98ad2a93babff2 2 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:54.164 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9f88ce2cb031fbb7a5d1cb70e83af2263b98ad2a93babff2 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X1y 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X1y 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.X1y 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:54.165 09:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=408a2b7741e8976396e4aebda37ef371 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gVB 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 408a2b7741e8976396e4aebda37ef371 0 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 408a2b7741e8976396e4aebda37ef371 0 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=408a2b7741e8976396e4aebda37ef371 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gVB 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gVB 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gVB 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2d6e2a2dfc8d37619fbfc77d1075f6814cfb5c7d374b0fbe9aabb306b87fb7a 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xpY 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b2d6e2a2dfc8d37619fbfc77d1075f6814cfb5c7d374b0fbe9aabb306b87fb7a 3 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2d6e2a2dfc8d37619fbfc77d1075f6814cfb5c7d374b0fbe9aabb306b87fb7a 3 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2d6e2a2dfc8d37619fbfc77d1075f6814cfb5c7d374b0fbe9aabb306b87fb7a 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xpY 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xpY 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xpY 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 609579 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 609579 ']' 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.165 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GDM 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HHo ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HHo 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AgE 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.P4w ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.P4w 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.NOf 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gtr ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gtr 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.X1y 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gVB ]] 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gVB 00:24:54.423 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xpY 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.424 09:39:27 
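The trace up to this point finishes generating the DH-HMAC-CHAP secrets (gen_dhchap_key draws random bytes with xxd, format_dhchap_key wraps them into a DHHC-1 string and stores them in a 0600 temp file) and then registers every key and controller key with the target through the keyring_file_add_key RPC. Below is a minimal stand-alone sketch of that provisioning flow; the DHHC-1 layout (base64 of the hex secret followed by its little-endian CRC-32) and the rpc.py socket at /var/tmp/spdk.sock are assumptions, and the key name key2 is illustrative only.

#!/usr/bin/env bash
# Sketch of the key provisioning seen in the trace above; run from an SPDK checkout.
set -e

gen_dhchap_key() {    # usage: gen_dhchap_key <digest-name> <hex-length>
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=$1 len=$2 key file

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # random secret as hex text
    file=$(mktemp -t spdk.key-"$digest".XXX)
    # Assumed DHHC-1 layout: "DHHC-1:<digest index>:<base64(secret || crc32le)>:"
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

keyfile=$(gen_dhchap_key sha256 32)
# Register the file with the running target (socket path assumed).
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2 "$keyfile"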
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:54.424 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:54.681 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:54.681 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:55.613 Waiting for block devices as requested 00:24:55.613 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:24:55.870 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:55.870 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:56.128 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:56.128 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:56.128 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:56.128 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:56.385 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:56.385 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:56.385 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:56.385 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:56.643 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:56.643 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:56.643 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:56.901 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:56.901 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:56.901 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:57.467 No valid GPT data, bailing 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.467 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:57.467 09:39:30 
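configure_kernel_target (nvmf/common.sh@632 onwards) then builds a Linux kernel NVMe-oF target through configfs for the SPDK initiator to authenticate against: after setup.sh reset returns the devices listed above, it creates a subsystem with one namespace backed by the first unused, non-zoned block device (here /dev/nvme0n1, once the GPT probe bails) and a TCP port at 10.0.0.1:4420. The echo destinations are not visible in the xtrace output, so the sketch below uses the standard nvmet configfs attribute names; run as root with the nvmet modules available.

#!/usr/bin/env bash
# Sketch of the configfs target setup; attribute names are the stock kernel ones.
set -e
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet
modprobe nvmet-tcp        # needed for the tcp port below

mkdir -p "$subsys/namespaces/1" "$port"

echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Expose the subsystem on the port.
ln -s "$subsys" "$port/subsystems/"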
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:24:57.467 00:24:57.467 Discovery Log Number of Records 2, Generation counter 2 00:24:57.467 =====Discovery Log Entry 0====== 00:24:57.467 trtype: tcp 00:24:57.467 adrfam: ipv4 00:24:57.467 subtype: current discovery subsystem 00:24:57.467 treq: not specified, sq flow control disable supported 00:24:57.467 portid: 1 00:24:57.467 trsvcid: 4420 00:24:57.467 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:57.467 traddr: 10.0.0.1 00:24:57.467 eflags: none 00:24:57.467 sectype: none 00:24:57.467 =====Discovery Log Entry 1====== 00:24:57.467 trtype: tcp 00:24:57.467 adrfam: ipv4 00:24:57.467 subtype: nvme subsystem 00:24:57.467 treq: not specified, sq flow control disable supported 00:24:57.467 portid: 1 00:24:57.467 trsvcid: 4420 00:24:57.467 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:57.467 traddr: 10.0.0.1 00:24:57.467 eflags: none 00:24:57.467 sectype: none 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.467 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 nvme0n1 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
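host/auth.sh@36-38 registers the host NQN with the kernel target and restricts the subsystem to allow-listed hosts, and nvmet_auth_set_key (host/auth.sh@42-51) then loads the digest, DH group, host key and, when present, controller key into that host entry before each authenticated connect. The redirection targets are again not traced; the sketch below uses the standard kernel nvmet host attributes, which is presumably where those echo lines land, and reuses the key strings shown above for key slot 1.

#!/usr/bin/env bash
# Sketch of what nvmet_auth_set_key sha256 ffdhe2048 1 appears to configure.
set -e
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$host"
echo 0 > "$subsys/attr_allow_any_host"          # only allow-listed hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"

echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==:' > "$host/dhchap_key"
echo 'DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==:' > "$host/dhchap_ctrl_key"   # bidirectional auth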
00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.725 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.983 nvme0n1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.983 09:39:30 
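connect_authenticate (host/auth.sh@55 onwards) is the initiator half of each iteration: it pins the SPDK host to one digest/DH-group combination with bdev_nvme_set_options, attaches a controller to the kernel target with the matching --dhchap-key/--dhchap-ctrlr-key, checks that bdev_nvme_get_controllers reports nvme0, and detaches again. The same sequence as plain rpc.py calls, with the commands taken from the trace and the socket path assumed:

#!/usr/bin/env bash
set -e
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The attach only succeeds if DH-HMAC-CHAP completed; verify and tear down.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0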
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.983 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.241 nvme0n1 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.241 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.498 nvme0n1 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.498 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.499 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.756 nvme0n1 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 
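Key slot 4 has no companion controller key (ckeys[4] is empty), so this iteration exercises one-way authentication: the host presents --dhchap-key key4 but does not require the controller to authenticate back. The scripts get that behaviour from bash's ${var:+...} expansion, which the ckey=(...) line above uses to drop the controller-key option whenever the slot is empty; a tiny self-contained illustration:

#!/usr/bin/env bash
# ckey expands to "--dhchap-ctrlr-key ckey4" only when ckeys[4] is non-empty.
keyid=4
ckeys[4]=                                   # empty: no bidirectional auth for this slot
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "extra args: ${ckey[*]:-<none>}"       # prints: extra args: <none>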
00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.756 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.013 nvme0n1 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.013 09:39:31 
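With the slot-4 connect verified and torn down, the trace starts over with the next DH group (ffdhe3072). The whole test is a nested sweep: every digest from host/auth.sh@94 is combined with every FFDHE group and every key slot, and each combination runs nvmet_auth_set_key on the target side followed by connect_authenticate on the host side. In outline, with the two helpers stubbed so the sketch runs on its own:

#!/usr/bin/env bash
# Structure of the sweep visible in the trace, not the verbatim script.
keys=(key0 key1 key2 key3 key4)
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

nvmet_auth_set_key()   { echo "target: $1/$2 key$3"; }   # stand-in for host/auth.sh@42
connect_authenticate() { echo "host:   $1/$2 key$3"; }   # stand-in for host/auth.sh@55

for digest in "${digests[@]}"; do               # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@101
        for keyid in "${!keys[@]}"; do          # host/auth.sh@102, slots 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done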
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.013 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.270 
09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.270 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.527 nvme0n1 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.527 09:39:32 
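get_main_ns_ip (nvmf/common.sh@741-755), which the trace repeats before every attach, simply resolves which address the initiator should dial: NVMF_FIRST_TARGET_IP for RDMA runs, NVMF_INITIATOR_IP for TCP runs, 10.0.0.1 here. A stripped-down sketch of that lookup; the variable values are stand-ins for what the surrounding test environment exports.

#!/usr/bin/env bash
# Sketch of the address lookup; values mirror this TCP run, 10.0.0.2 is illustrative.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

    ip=${ip_candidates[$TEST_TRANSPORT]}    # name of the variable to dereference
    [[ -n ${!ip} ]] && echo "${!ip}"        # indirect expansion -> 10.0.0.1
}

get_main_ns_ip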
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.527 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.784 nvme0n1 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.784 09:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.784 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.041 nvme0n1 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.041 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.042 09:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.042 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.298 nvme0n1 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
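Each of the repeating blocks in this trace follows the same per-key sequence: nvmet_auth_set_key installs the DHHC-1 secret (and, when one exists, the controller secret) for the digest/dhgroup under test, then connect_authenticate restricts the SPDK initiator to that digest and FFDHE group, attaches with the matching key pair, checks that the controller came up as nvme0, and detaches again so the next keyid can run. A minimal sketch of that host-side sequence, using only the rpc_cmd calls visible in the trace (the NQNs, the 10.0.0.1:4420 listener, and the key0/ckey0 names refer to what the test set up earlier, outside this excerpt):

  # limit DH-HMAC-CHAP negotiation to the digest/dhgroup being exercised
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # attach with the host key and, for bidirectional auth, the controller key
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm the authenticated controller exists, then tear it down for the next key
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0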
00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.298 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.555 nvme0n1 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.555 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.119 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.683 nvme0n1 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.683 09:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.683 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.684 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.941 nvme0n1 00:25:01.941 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.941 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:01.941 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.942 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.506 nvme0n1 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.506 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
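By this point the trace has moved from ffdhe3072 to ffdhe4096: host/auth.sh simply nests the key loop inside a dhgroup loop, so every DHHC-1 key is exercised against every FFDHE group with the sha256 digest. A sketch of the loop structure visible in the @101-@104 frames above (the dhgroups and keys arrays are populated earlier in the script; only ffdhe3072, ffdhe4096 and ffdhe6144 appear in this excerpt, and the digest is sha256 throughout):

  for dhgroup in "${dhgroups[@]}"; do       # ffdhe3072, ffdhe4096, ffdhe6144 in this excerpt
      for keyid in "${!keys[@]}"; do        # keyids 0..4 in this excerpt
          nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"   # program the secret on the nvmet target side
          connect_authenticate sha256 "$dhgroup" "$keyid"   # attach, verify nvme0, detach on the SPDK host
      done
  done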
00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.506 09:39:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.506 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 nvme0n1 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.763 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.764 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.327 nvme0n1 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.327 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:03.328 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:03.328 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.328 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.222 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.223 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.481 nvme0n1 00:25:05.481 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.481 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.481 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.481 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.481 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.481 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.482 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.047 nvme0n1 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.047 09:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.047 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.613 nvme0n1 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.613 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:06.871 
09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.871 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.436 nvme0n1 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.436 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.000 nvme0n1 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:08.001 09:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.001 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 nvme0n1 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.416 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.417 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.349 nvme0n1 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.349 09:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.349 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.280 nvme0n1 00:25:11.280 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.280 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.280 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.280 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.281 09:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.281 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.211 nvme0n1 00:25:12.211 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.211 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.211 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.211 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.211 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.211 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.468 09:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.468 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.399 nvme0n1 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.399 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.399 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:13.400 
09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.400 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.657 nvme0n1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.658 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.916 nvme0n1 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.916 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.174 nvme0n1 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:14.174 09:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.174 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.175 nvme0n1 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.175 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
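Condensed, each keyid/dhgroup iteration traced here performs the same three steps. The following is a sketch pieced together from the commands visible in the trace (rpc_cmd being the harness wrapper that forwards to scripts/rpc.py), not a verbatim excerpt of host/auth.sh:

    # target side: install the DH-HMAC-CHAP key (and controller key, if present) for this keyid
    nvmet_auth_set_key sha384 ffdhe2048 2
    # host side: restrict the initiator to the digest/dhgroup under test ...
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # ... then connect to the target at 10.0.0.1:4420 with the matching key pair
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

The keyid=4 case that follows drops --dhchap-ctrlr-key because its ckey is empty ([[ -z '' ]] in the trace), so only unidirectional authentication is exercised for that key.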
00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:14.432 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 nvme0n1 00:25:14.432 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.432 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.432 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.432 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.432 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.690 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.690 09:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.691 nvme0n1 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.691 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.948 09:39:47 
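The 10.0.0.1 handed to -a in these attach calls comes from the get_main_ns_ip helper traced at nvmf/common.sh@741-755 just before each connect. Reconstructed loosely from that trace (the transport variable name is an assumption; only the rdma/tcp mapping and the final indirection are shown by the log):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        # transport is tcp in this job, so ip resolves to the name NVMF_INITIATOR_IP ...
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ... and indirect expansion of that name yields 10.0.0.1
        echo "${!ip}"
    }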
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:14.948 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.949 09:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.949 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.207 nvme0n1 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.207 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.465 nvme0n1 00:25:15.465 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.465 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.465 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.465 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.465 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.465 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.465 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:15.465 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.465 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.466 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.724 nvme0n1 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.724 
09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.724 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.981 nvme0n1 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.981 
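Between each attach and the next nvmet_auth_set_key, the trace shows the same pass/fail check (the host/auth.sh@64 and @65 lines). Condensed into a sketch rather than the literal script text:

    # authentication must have succeeded and left exactly the expected controller behind
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # detach so the next digest/dhgroup/keyid combination starts from a clean slate
    rpc_cmd bdev_nvme_detach_controller nvme0

If the DH-HMAC-CHAP handshake had failed, the controller list would not contain nvme0 and the comparison (xtrace'd as [[ nvme0 == \n\v\m\e\0 ]] above) would fail the test.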
09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.981 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.982 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 nvme0n1 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.240 09:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.240 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.498 nvme0n1 00:25:16.498 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.498 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.498 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.498 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.498 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.755 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.013 nvme0n1 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.013 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.578 nvme0n1 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.578 09:39:50 
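The iterations that follow (the last keyid for ffdhe4096, then ffdhe6144 starting over at keyid 0) are the tail of the same nested loop from host/auth.sh@101-104 that has driven this whole section. In outline, with the helper names taken from the trace and sha384 standing in for the digest chosen by the enclosing pass:

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 ...
        for keyid in "${!keys[@]}"; do         # keyids 0-4; ckey4 is empty
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done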
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.578 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.836 nvme0n1 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.836 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.837 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.401 nvme0n1 00:25:18.401 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.401 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.401 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.401 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.401 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.401 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:18.401 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.402 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.967 nvme0n1 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.967 09:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.967 09:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.967 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.225 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.225 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.225 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.789 nvme0n1 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.789 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.790 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.790 
09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.355 nvme0n1 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.355 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.356 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.920 nvme0n1 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.920 09:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.920 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.290 nvme0n1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.290 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.220 nvme0n1 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.220 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.221 
09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.592 nvme0n1 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.592 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.593 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.523 nvme0n1 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.523 09:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.523 09:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.523 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.454 nvme0n1 00:25:26.455 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.455 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.455 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.455 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.455 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.455 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:26.712 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.713 nvme0n1 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.713 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.970 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.971 nvme0n1 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:26.971 
09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.971 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.229 nvme0n1 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.229 
09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.229 09:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.487 nvme0n1 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.487 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.745 nvme0n1 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.745 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.003 nvme0n1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.003 
09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.003 09:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.003 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.261 nvme0n1 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:28.261 09:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.261 09:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 nvme0n1 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.528 09:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.528 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.786 nvme0n1 00:25:28.786 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.786 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.786 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.786 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.786 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.786 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.042 
09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:29.042 nvme0n1 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.042 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.299 09:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.299 09:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.557 nvme0n1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.557 09:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.557 09:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.557 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 nvme0n1 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.815 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.379 nvme0n1 00:25:30.379 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.379 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.379 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.380 09:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.638 nvme0n1 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.638 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.895 nvme0n1 00:25:30.895 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.895 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.895 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.895 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.895 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.152 09:40:03 
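The keyid=4 pass just above stands out because ckeys[4] is empty, so the attach call carried only --dhchap-key key4 and no controller key: bidirectional authentication is simply skipped for that index. The mechanism is the ${var:+...} expansion into an array shown by the ckey= lines in the trace. A small hypothetical reproduction (the array contents here are illustrative):

  keyid=4
  ckeys[4]=""                            # no controller secret for this index
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"                     # prints 0 -> nothing extra is appended to the attach RPC

  keyid=1
  ckeys[1]="DHHC-1:02:..."               # any non-empty secret
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"                      # prints: --dhchap-ctrlr-key ckey1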
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.152 09:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 nvme0n1 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.717 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.718 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.718 09:40:04 
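Every connect also runs get_main_ns_ip, whose expanded trace repeats throughout this section: it maps the transport to the name of an environment variable and dereferences it, which is how 10.0.0.1 ends up in the attach command. A hedged reconstruction from the expanded trace (the transport variable and fallback below are illustrative; only the candidate map, the chosen name, and the final echo are visible in the log):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      local transport=${TEST_TRANSPORT:-tcp}   # the trace shows "tcp" already substituted
      ip=${ip_candidates[$transport]}          # -> NVMF_INITIATOR_IP for tcp
      [[ -n ${!ip} ]] && echo "${!ip}"         # indirect expansion -> 10.0.0.1 in this run
  }

In this run the initiator address resolves to 10.0.0.1, so every attach targets 10.0.0.1 port 4420.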
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.718 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.281 nvme0n1 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.281 09:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.845 nvme0n1 00:25:32.845 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.103 09:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.667 nvme0n1 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:33.667 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.668 09:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.668 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.230 nvme0n1 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.230 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.486 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjYmVhZjc5MzEyYWQ1M2ZmOTM3YmM2NzY0MzEzNzVeXomn: 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: ]] 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWYzMjZhYzEyMWNiYjczM2RjM2NkM2FlZGIyMjliYzJiMjlmZGFlNTNmOTJhZjQyNTFjNjY1ZDBjN2U2YzVkOHYVLqc=: 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.487 09:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.419 nvme0n1 00:25:35.419 09:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.419 09:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.419 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.420 09:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.789 nvme0n1 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.789 09:40:09 
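Once the DH-HMAC-CHAP handshake succeeds, the controller has to be visible under the name requested with -b, and it is detached before the next digest/dhgroup/key combination runs. A stand-alone equivalent of the check-and-teardown entries that follow each attach, assuming the same RPC defaults as the commands above:

  # The attach created bdev controller "nvme0"; fail the test if it is missing.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]] || exit 1
  # Remove it so the next key index starts from a clean state.
  scripts/rpc.py bdev_nvme_detach_controller nvme0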
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM0ZWE0ZTYzMmJiMmIzYzc1Zjg1YjVkNmRhNDI1MmZJPjN0: 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjlkZWFjNmMzZmMyZGMxOTAwZjFjYmU1YWQzNjJkNDPb/aOU: 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.789 09:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.789 09:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.720 nvme0n1 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY4OGNlMmNiMDMxZmJiN2E1ZDFjYjcwZTgzYWYyMjYzYjk4YWQyYTkzYmFiZmYyb21ZKg==: 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: ]] 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA4YTJiNzc0MWU4OTc2Mzk2ZTRhZWJkYTM3ZWYzNzGOMJhv: 00:25:37.720 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.721 09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.721 
09:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.650 nvme0n1 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.650 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjJkNmUyYTJkZmM4ZDM3NjE5ZmJmYzc3ZDEwNzVmNjgxNGNmYjVjN2QzNzRiMGZiZTlhYWJiMzA2Yjg3ZmI3Yeq3qaQ=: 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.651 09:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 nvme0n1 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxZjZkN2U5MWM3MzQzOTJiNWQ1ZjRlZjg0MTEzZTk2NjI2MTUxNTYwNjk1MDQy24x1Yw==: 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhiNDZhYTQ1NzMwYTdlZmIxMDIyMTlkYWJjODA0MmJhODNkOGVjYmE4ZjFmZDQw1V7SAA==: 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 request: 00:25:40.029 { 00:25:40.029 "name": "nvme0", 00:25:40.029 "trtype": "tcp", 00:25:40.029 "traddr": "10.0.0.1", 00:25:40.029 "adrfam": "ipv4", 00:25:40.029 "trsvcid": "4420", 00:25:40.029 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.029 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.029 "prchk_reftag": false, 00:25:40.029 "prchk_guard": false, 00:25:40.029 "hdgst": false, 00:25:40.029 "ddgst": false, 00:25:40.029 "method": "bdev_nvme_attach_controller", 00:25:40.029 "req_id": 1 00:25:40.029 } 00:25:40.029 Got JSON-RPC error response 00:25:40.029 response: 00:25:40.029 { 00:25:40.029 "code": -5, 00:25:40.029 "message": "Input/output error" 00:25:40.029 } 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.029 09:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.029 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.030 request: 00:25:40.030 { 00:25:40.030 "name": "nvme0", 00:25:40.030 "trtype": "tcp", 00:25:40.030 "traddr": "10.0.0.1", 00:25:40.030 "adrfam": "ipv4", 00:25:40.030 "trsvcid": "4420", 00:25:40.030 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.030 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.030 "prchk_reftag": false, 00:25:40.030 "prchk_guard": false, 00:25:40.030 "hdgst": false, 00:25:40.030 "ddgst": false, 00:25:40.030 "dhchap_key": "key2", 00:25:40.030 "method": "bdev_nvme_attach_controller", 00:25:40.030 "req_id": 1 00:25:40.030 } 00:25:40.030 Got JSON-RPC error response 00:25:40.030 response: 00:25:40.030 { 00:25:40.030 "code": -5, 00:25:40.030 "message": "Input/output error" 00:25:40.030 } 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.030 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.288 request: 00:25:40.288 { 00:25:40.288 "name": "nvme0", 00:25:40.288 "trtype": "tcp", 00:25:40.288 "traddr": "10.0.0.1", 00:25:40.288 "adrfam": "ipv4", 00:25:40.288 "trsvcid": "4420", 00:25:40.288 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.288 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.288 "prchk_reftag": false, 00:25:40.288 "prchk_guard": false, 00:25:40.288 "hdgst": false, 00:25:40.288 "ddgst": false, 00:25:40.288 "dhchap_key": "key1", 00:25:40.288 "dhchap_ctrlr_key": "ckey2", 00:25:40.288 "method": "bdev_nvme_attach_controller", 00:25:40.288 "req_id": 1 00:25:40.288 } 00:25:40.288 Got JSON-RPC error response 00:25:40.288 response: 00:25:40.288 { 00:25:40.288 "code": -5, 00:25:40.288 "message": "Input/output error" 00:25:40.288 } 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.288 rmmod nvme_tcp 00:25:40.288 rmmod nvme_fabrics 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 609579 ']' 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 609579 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 609579 ']' 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 609579 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 609579 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 609579' 00:25:40.288 killing process with pid 609579 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 609579 00:25:40.288 09:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 609579 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.547 09:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.547 09:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:42.484 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:42.750 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:42.750 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:42.750 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:42.750 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:42.750 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:42.750 09:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:43.683 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:43.941 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:43.941 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:45.842 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:25:45.842 09:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GDM /tmp/spdk.key-null.AgE /tmp/spdk.key-sha256.NOf /tmp/spdk.key-sha384.X1y /tmp/spdk.key-sha512.xpY /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:45.842 09:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.213 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:47.213 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:47.213 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:47.213 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:47.213 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:47.213 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:47.213 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:47.213 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:47.213 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:47.213 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:47.213 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:47.213 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:47.213 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:47.213 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:47.213 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:47.213 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:47.213 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:47.213 00:25:47.213 real 0m56.563s 00:25:47.213 user 0m53.513s 00:25:47.213 sys 0m5.949s 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.213 ************************************ 00:25:47.213 END TEST nvmf_auth_host 00:25:47.213 ************************************ 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.213 ************************************ 00:25:47.213 START TEST nvmf_digest 00:25:47.213 ************************************ 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:47.213 * Looking for test storage... 
00:25:47.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:47.213 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:47.214 
09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.214 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:49.112 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.112 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:49.113 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.113 
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:49.113 Found net devices under 0000:82:00.0: cvl_0_0 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.113 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:49.371 Found net devices under 0000:82:00.1: cvl_0_1 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.371 09:40:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:25:49.371 00:25:49.371 --- 10.0.0.2 ping statistics --- 00:25:49.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.371 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:25:49.371 00:25:49.371 --- 10.0.0.1 ping statistics --- 00:25:49.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.371 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.371 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:49.371 ************************************ 00:25:49.371 START TEST nvmf_digest_clean 00:25:49.371 ************************************ 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=619732 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 619732 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 619732 ']' 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.371 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:49.371 [2024-07-25 09:40:22.068411] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:25:49.371 [2024-07-25 09:40:22.068501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.371 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.629 [2024-07-25 09:40:22.137448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.629 [2024-07-25 09:40:22.256490] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.629 [2024-07-25 09:40:22.256554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.629 [2024-07-25 09:40:22.256579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.629 [2024-07-25 09:40:22.256593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.629 [2024-07-25 09:40:22.256605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
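[editor's note] The target-side bring-up traced above (nvmf_tcp_init followed by nvmfappstart) can be hard to follow inside the xtrace output, so here is a condensed standalone sketch. The namespace, interface names and addresses are exactly the ones printed in this log; the rpc.py calls at the end are an assumption about what common_target_config issues, since the log only confirms the resulting null0 bdev and the TCP listener on 10.0.0.2:4420.

    # move one E810 port into a namespace and wire a 10.0.0.0/24 loopback to the other
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # target runs inside the namespace, paused until framework_start_init
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

    # assumed configuration sequence; subsystem/bdev names come from later lines
    # of this log, the exact arguments are illustrative only
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 100 4096
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420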
00:25:49.629 [2024-07-25 09:40:22.256636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.560 null0 00:25:50.560 [2024-07-25 09:40:23.192213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.560 [2024-07-25 09:40:23.216450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=619886 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 619886 /var/tmp/bperf.sock 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 619886 ']' 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.560 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.560 [2024-07-25 09:40:23.267082] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:25:50.560 [2024-07-25 09:40:23.267148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619886 ] 00:25:50.828 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.828 [2024-07-25 09:40:23.330038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.828 [2024-07-25 09:40:23.445594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.828 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.828 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:50.828 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:50.828 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:50.828 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:51.090 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.090 09:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.654 nvme0n1 00:25:51.654 09:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:51.654 09:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:51.654 Running I/O for 2 seconds... 
00:25:54.181 00:25:54.181 Latency(us) 00:25:54.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.181 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:54.181 nvme0n1 : 2.00 18560.95 72.50 0.00 0.00 6886.77 3713.71 24369.68 00:25:54.181 =================================================================================================================== 00:25:54.181 Total : 18560.95 72.50 0.00 0.00 6886.77 3713.71 24369.68 00:25:54.181 0 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:54.181 | select(.opcode=="crc32c") 00:25:54.181 | "\(.module_name) \(.executed)"' 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 619886 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 619886 ']' 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 619886 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 619886 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 619886' 00:25:54.181 killing process with pid 619886 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 619886 00:25:54.181 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.181 00:25:54.181 Latency(us) 00:25:54.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.181 =================================================================================================================== 00:25:54.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.181 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 619886 00:25:54.438 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:54.438 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:54.438 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:54.438 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:54.438 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=620292 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 620292 /var/tmp/bperf.sock 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 620292 ']' 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.439 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:54.439 [2024-07-25 09:40:27.022812] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:25:54.439 [2024-07-25 09:40:27.022912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620292 ] 00:25:54.439 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:54.439 Zero copy mechanism will not be used. 
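[editor's note] Every digest-clean pass in this section follows the same initiator-side pattern; the sketch below is condensed from the first pass above, using the commands as they appear in the trace (only the absolute workspace paths are shortened).

    # bdevperf as the NVMe/TCP initiator, started paused on its own RPC socket
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst turns on the NVMe/TCP data digest, which is what exercises crc32c
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later passes only change the workload parameters (-w randread/randwrite, -o 4096/131072, -q 128/16), which is why the surrounding trace repeats almost verbatim four times.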
00:25:54.439 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.439 [2024-07-25 09:40:27.086274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.696 [2024-07-25 09:40:27.205059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.696 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.696 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:54.696 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:54.696 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:54.696 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:54.953 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.954 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.518 nvme0n1 00:25:55.518 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:55.518 09:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:55.518 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:55.518 Zero copy mechanism will not be used. 00:25:55.519 Running I/O for 2 seconds... 
00:25:57.416 00:25:57.416 Latency(us) 00:25:57.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.416 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:57.416 nvme0n1 : 2.00 4700.28 587.54 0.00 0.00 3399.65 843.47 12379.02 00:25:57.416 =================================================================================================================== 00:25:57.416 Total : 4700.28 587.54 0.00 0.00 3399.65 843.47 12379.02 00:25:57.416 0 00:25:57.416 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:57.416 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:57.416 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:57.416 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:57.416 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:57.416 | select(.opcode=="crc32c") 00:25:57.416 | "\(.module_name) \(.executed)"' 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 620292 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 620292 ']' 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 620292 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:57.673 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620292 00:25:57.930 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:57.930 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:57.930 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620292' 00:25:57.930 killing process with pid 620292 00:25:57.930 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 620292 00:25:57.930 Received shutdown signal, test time was about 2.000000 seconds 00:25:57.930 00:25:57.930 Latency(us) 00:25:57.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.930 =================================================================================================================== 00:25:57.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:57.930 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 620292 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=620700 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 620700 /var/tmp/bperf.sock 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 620700 ']' 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:58.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.187 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:58.187 [2024-07-25 09:40:30.712656] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
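[editor's note] The MiB/s column in the bdevperf summaries is simply IOPS multiplied by the I/O size; as a quick sanity check on the 131072-byte randread pass reported above:

    awk 'BEGIN { printf "%.2f\n", 4700.28 * 131072 / 1048576 }'   # prints 587.54, matching the table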
00:25:58.187 [2024-07-25 09:40:30.712723] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620700 ] 00:25:58.187 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.187 [2024-07-25 09:40:30.769368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.187 [2024-07-25 09:40:30.875400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.444 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.444 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:58.444 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:58.444 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:58.444 09:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:58.701 09:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.701 09:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.958 nvme0n1 00:25:58.959 09:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:58.959 09:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:59.216 Running I/O for 2 seconds... 
00:26:01.114 00:26:01.114 Latency(us) 00:26:01.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.114 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:01.114 nvme0n1 : 2.01 20429.58 79.80 0.00 0.00 6259.19 3228.25 11747.93 00:26:01.114 =================================================================================================================== 00:26:01.114 Total : 20429.58 79.80 0.00 0.00 6259.19 3228.25 11747.93 00:26:01.114 0 00:26:01.114 09:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:01.114 09:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:01.114 09:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:01.114 09:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:01.114 09:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:01.114 | select(.opcode=="crc32c") 00:26:01.114 | "\(.module_name) \(.executed)"' 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 620700 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 620700 ']' 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 620700 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620700 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620700' 00:26:01.370 killing process with pid 620700 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 620700 00:26:01.370 Received shutdown signal, test time was about 2.000000 seconds 00:26:01.370 00:26:01.370 Latency(us) 00:26:01.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.370 =================================================================================================================== 00:26:01.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.370 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 620700 00:26:01.628 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:01.628 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:01.628 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:01.628 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:01.628 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=621240 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 621240 /var/tmp/bperf.sock 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 621240 ']' 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:01.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:01.886 [2024-07-25 09:40:34.405343] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:01.886 [2024-07-25 09:40:34.405447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621240 ] 00:26:01.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.886 Zero copy mechanism will not be used. 
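[editor's note] After each pass the script confirms that the digest work really ran in the expected accel module. The check traced above reduces to roughly the following sketch; "software" is the expected module here because DSA offload (scan_dsa=false) is disabled in these runs.

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]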
00:26:01.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.886 [2024-07-25 09:40:34.467148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.886 [2024-07-25 09:40:34.580112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:01.886 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:02.451 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.451 09:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.709 nvme0n1 00:26:02.709 09:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:02.709 09:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.709 Zero copy mechanism will not be used. 00:26:02.709 Running I/O for 2 seconds... 
00:26:05.237 00:26:05.237 Latency(us) 00:26:05.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.237 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:05.237 nvme0n1 : 2.00 5176.09 647.01 0.00 0.00 3083.46 2318.03 8252.68 00:26:05.237 =================================================================================================================== 00:26:05.237 Total : 5176.09 647.01 0.00 0.00 3083.46 2318.03 8252.68 00:26:05.237 0 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:05.237 | select(.opcode=="crc32c") 00:26:05.237 | "\(.module_name) \(.executed)"' 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 621240 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 621240 ']' 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 621240 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 621240 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 621240' 00:26:05.237 killing process with pid 621240 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 621240 00:26:05.237 Received shutdown signal, test time was about 2.000000 seconds 00:26:05.237 00:26:05.237 Latency(us) 00:26:05.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.237 =================================================================================================================== 00:26:05.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 621240 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 619732 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 619732 ']' 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 619732 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.237 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 619732 00:26:05.495 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.495 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.495 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 619732' 00:26:05.495 killing process with pid 619732 00:26:05.495 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 619732 00:26:05.495 09:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 619732 00:26:05.754 00:26:05.754 real 0m16.261s 00:26:05.754 user 0m30.878s 00:26:05.754 sys 0m5.177s 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.754 ************************************ 00:26:05.754 END TEST nvmf_digest_clean 00:26:05.754 ************************************ 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:05.754 ************************************ 00:26:05.754 START TEST nvmf_digest_error 00:26:05.754 ************************************ 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=621678 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:05.754 09:40:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 621678 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 621678 ']' 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.754 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:05.754 [2024-07-25 09:40:38.381821] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:05.754 [2024-07-25 09:40:38.381896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.754 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.754 [2024-07-25 09:40:38.445675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.014 [2024-07-25 09:40:38.556065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.014 [2024-07-25 09:40:38.556128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.014 [2024-07-25 09:40:38.556142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.014 [2024-07-25 09:40:38.556153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.014 [2024-07-25 09:40:38.556162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
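[editor's note] The nvmf_digest_error test starting here reuses the same target but deliberately breaks the digest path. A sketch of the RPC sequence the following trace performs, assembled from the commands it prints: the plain rpc.py calls go to the target's default socket, the -s /var/tmp/bperf.sock call to the bdevperf initiator.

    # target side: route crc32c through the error-injection accel module
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # initiator side: keep NVMe error statistics and retry indefinitely
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # per test case the injection is toggled, e.g. corrupt 256 crc32c operations
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

The "COMMAND TRANSIENT TRANSPORT ERROR" completions and "data digest error" messages further down are the expected result of that corruption being detected on the read path.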
00:26:06.014 [2024-07-25 09:40:38.556188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.014 [2024-07-25 09:40:38.612722] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.014 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.014 null0 00:26:06.014 [2024-07-25 09:40:38.729593] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.272 [2024-07-25 09:40:38.753818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=621822 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 621822 /var/tmp/bperf.sock 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 621822 ']' 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.272 09:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.272 [2024-07-25 09:40:38.798653] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:06.272 [2024-07-25 09:40:38.798742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621822 ] 00:26:06.272 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.272 [2024-07-25 09:40:38.856311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.272 [2024-07-25 09:40:38.964299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.529 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.529 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:06.529 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:06.529 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:06.786 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:06.786 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.786 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.786 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.786 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.786 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.043 nvme0n1 00:26:07.043 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:07.043 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.043 09:40:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.043 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.043 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:07.043 09:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.301 Running I/O for 2 seconds... 00:26:07.301 [2024-07-25 09:40:39.896215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.896269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.896290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.908201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.908240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.908268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.924146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.924181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.924201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.939700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.939729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.939759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.953618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.953649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.953681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.965233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.965267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.965286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.979261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.979295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.979314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:39.991619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:39.991648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:39.991682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:40.008848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:40.008907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:40.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:40.023857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:40.023890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:40.023906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.301 [2024-07-25 09:40:40.034106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.301 [2024-07-25 09:40:40.034154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.301 [2024-07-25 09:40:40.034172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.048784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.048816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.048833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.063294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.063325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 
[2024-07-25 09:40:40.063365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.076281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.076316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.076337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.093367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.093415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.093432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.105608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.105637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.105668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.122186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.122222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.122241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.140184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.140220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.140240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.157247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.157282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.157311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.168479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.168510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2933 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.168527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.184923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.184958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.184977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.203610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.203638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.203655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.220593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.220622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.220638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.232754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.232789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.248590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.248629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.248645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.265062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.265097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.265117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.559 [2024-07-25 09:40:40.276786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.559 [2024-07-25 09:40:40.276820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:92 nsid:1 lba:6137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.559 [2024-07-25 09:40:40.276840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.816 [2024-07-25 09:40:40.292720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.816 [2024-07-25 09:40:40.292760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.816 [2024-07-25 09:40:40.292780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.816 [2024-07-25 09:40:40.310208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.816 [2024-07-25 09:40:40.310243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.816 [2024-07-25 09:40:40.310261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.816 [2024-07-25 09:40:40.327364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.816 [2024-07-25 09:40:40.327411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.816 [2024-07-25 09:40:40.327428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.342778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.342813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.342832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.354927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.354962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.354981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.372198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.372233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.372253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.389870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.389905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.389925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.405661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.405710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.405729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.422612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.422641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.422657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.435496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.435525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.435541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.452916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.452952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.452971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.466792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.466827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.466845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.478724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.478758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.478777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.492284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 
00:26:07.817 [2024-07-25 09:40:40.492318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.492336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.507562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.507590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.507605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.519426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.519454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.519469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.537078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.537112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.537132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.817 [2024-07-25 09:40:40.550428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:07.817 [2024-07-25 09:40:40.550459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.817 [2024-07-25 09:40:40.550482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.563185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.563220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.563239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.578057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.578092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.578111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.589298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.589332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.589352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.602923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.602958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.602977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.615466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.615495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.615512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.630841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.630877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.630896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.645063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.645092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.645108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.657801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.657829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.657845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.673785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.673818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.673835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.684826] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.075 [2024-07-25 09:40:40.684855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.075 [2024-07-25 09:40:40.684870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.075 [2024-07-25 09:40:40.700291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.700320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.700336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.712185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.712212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.712228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.722726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.722755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.722770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.735140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.735176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.735192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.745494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.745525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.745542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.759151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.759180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.759196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:08.076 [2024-07-25 09:40:40.772180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.772209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.772225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.784605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.784654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.784670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.076 [2024-07-25 09:40:40.795413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.076 [2024-07-25 09:40:40.795442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.076 [2024-07-25 09:40:40.795459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.334 [2024-07-25 09:40:40.811702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.334 [2024-07-25 09:40:40.811747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.334 [2024-07-25 09:40:40.811764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.334 [2024-07-25 09:40:40.827605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.334 [2024-07-25 09:40:40.827635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.334 [2024-07-25 09:40:40.827667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.334 [2024-07-25 09:40:40.841527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.334 [2024-07-25 09:40:40.841558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.334 [2024-07-25 09:40:40.841574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.334 [2024-07-25 09:40:40.852262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.334 [2024-07-25 09:40:40.852292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.334 [2024-07-25 09:40:40.852308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.334 [2024-07-25 09:40:40.863989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.334 [2024-07-25 09:40:40.864019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.334 [2024-07-25 09:40:40.864035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.334 [2024-07-25 09:40:40.876787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.334 [2024-07-25 09:40:40.876816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.334 [2024-07-25 09:40:40.876832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.887684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.887713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.887735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.900792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.900821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.900839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.912943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.912972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.912987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.923602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.923646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.923664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.938108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.938136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.938152] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.948561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.948592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.948608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.961389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.961427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.961444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.975813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.975841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.975872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:40.991087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:40.991115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:40.991146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:41.005122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:41.005149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:41.005179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:41.016429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:41.016457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:41.016473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:41.028957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:41.028985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:41.029024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:41.040391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:41.040421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:41.040437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:41.052303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:41.052331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:41.052370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.335 [2024-07-25 09:40:41.064501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.335 [2024-07-25 09:40:41.064532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.335 [2024-07-25 09:40:41.064548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.075401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.075431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.593 [2024-07-25 09:40:41.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.087695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.087738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.593 [2024-07-25 09:40:41.087760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.099909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.099937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.593 [2024-07-25 09:40:41.099973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.115250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.115278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:08.593 [2024-07-25 09:40:41.115309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.129121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.129148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.593 [2024-07-25 09:40:41.129178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.140134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.140161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.593 [2024-07-25 09:40:41.140192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.593 [2024-07-25 09:40:41.153255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.593 [2024-07-25 09:40:41.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.153314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.164966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.164993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.165024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.176315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.176363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.176381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.191367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.191410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.191426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.206507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.206537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:3375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.206554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.222038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.222072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.222103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.235752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.235779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.235810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.247169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.247196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.247226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.261683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.261711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.261741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.276890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.276917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.276947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.292894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.292922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.292953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.303128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.303155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.303185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.594 [2024-07-25 09:40:41.318533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.594 [2024-07-25 09:40:41.318563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.594 [2024-07-25 09:40:41.318578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.333831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.333859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.333890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.348150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.348178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.348209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.363920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.363949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.363979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.380799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.380827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.380858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.391115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.391143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.391173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.404795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 
[2024-07-25 09:40:41.404823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.404853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.419131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.419158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.419188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.430601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.852 [2024-07-25 09:40:41.430629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-07-25 09:40:41.430645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.852 [2024-07-25 09:40:41.445096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.445123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.445155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.460267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.460295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.460338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.471579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.471608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.471624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.486102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.486129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.486160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.498626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.498655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.498670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.509479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.509507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.509523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.523153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.523181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.523211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.539174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.539201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.539232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.553916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.553944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.553975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.569165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.569195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.569225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-07-25 09:40:41.585628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:08.853 [2024-07-25 09:40:41.585666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-07-25 09:40:41.585699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.111 [2024-07-25 09:40:41.601270] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.111 [2024-07-25 09:40:41.601304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.111 [2024-07-25 09:40:41.601322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.111 [2024-07-25 09:40:41.618816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.111 [2024-07-25 09:40:41.618850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.111 [2024-07-25 09:40:41.618868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.111 [2024-07-25 09:40:41.632032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.111 [2024-07-25 09:40:41.632067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.111 [2024-07-25 09:40:41.632085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.111 [2024-07-25 09:40:41.648307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.111 [2024-07-25 09:40:41.648341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.111 [2024-07-25 09:40:41.648368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.665935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.665969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.665987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.677615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.677660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.677679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.694190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.694224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.694242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:09.112 [2024-07-25 09:40:41.708697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.708744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.708763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.721747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.721780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.721798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.739010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.739044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.739063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.751137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.751171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.751190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.766626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.766654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.766669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.784323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.784365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.784387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.796019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.796052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.796071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.810802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.810836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.810855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.824584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.824628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-07-25 09:40:41.838022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.112 [2024-07-25 09:40:41.838055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-07-25 09:40:41.838080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.369 [2024-07-25 09:40:41.850532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.369 [2024-07-25 09:40:41.850561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.369 [2024-07-25 09:40:41.850577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.369 [2024-07-25 09:40:41.863479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.370 [2024-07-25 09:40:41.863507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.370 [2024-07-25 09:40:41.863522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.370 [2024-07-25 09:40:41.876539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd3bcb0) 00:26:09.370 [2024-07-25 09:40:41.876568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.370 [2024-07-25 09:40:41.876584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.370 00:26:09.370 Latency(us) 00:26:09.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.370 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:09.370 nvme0n1 : 2.01 18206.81 71.12 0.00 0.00 7020.09 3398.16 23204.60 00:26:09.370 =================================================================================================================== 
00:26:09.370 Total : 18206.81 71.12 0.00 0.00 7020.09 3398.16 23204.60 00:26:09.370 0 00:26:09.370 09:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:09.370 09:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:09.370 09:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:09.370 | .driver_specific 00:26:09.370 | .nvme_error 00:26:09.370 | .status_code 00:26:09.370 | .command_transient_transport_error' 00:26:09.370 09:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 621822 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 621822 ']' 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 621822 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 621822 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 621822' 00:26:09.628 killing process with pid 621822 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 621822 00:26:09.628 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.628 00:26:09.628 Latency(us) 00:26:09.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.628 =================================================================================================================== 00:26:09.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.628 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 621822 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=622228 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 622228 /var/tmp/bperf.sock 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 622228 ']' 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.886 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.886 [2024-07-25 09:40:42.512561] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:09.886 [2024-07-25 09:40:42.512643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622228 ] 00:26:09.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.886 Zero copy mechanism will not be used. 00:26:09.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.886 [2024-07-25 09:40:42.572951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.144 [2024-07-25 09:40:42.683710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.144 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.144 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:10.144 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.144 09:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.400 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:10.400 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.400 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:10.400 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.400 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.400 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.964 nvme0n1 00:26:10.964 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:10.964 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.964 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:10.964 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.964 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:10.964 09:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.964 Zero copy mechanism will not be used. 00:26:10.964 Running I/O for 2 seconds... 00:26:10.964 [2024-07-25 09:40:43.613201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.964 [2024-07-25 09:40:43.613256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.964 [2024-07-25 09:40:43.613278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.964 [2024-07-25 09:40:43.619602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.964 [2024-07-25 09:40:43.619647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.964 [2024-07-25 09:40:43.619664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.964 [2024-07-25 09:40:43.625836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.964 [2024-07-25 09:40:43.625871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.964 [2024-07-25 09:40:43.625890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.964 [2024-07-25 09:40:43.631793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.964 [2024-07-25 09:40:43.631828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.964 [2024-07-25 09:40:43.631847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.964 [2024-07-25 09:40:43.637664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.964 [2024-07-25 09:40:43.637706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.964 [2024-07-25 09:40:43.637723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.964 [2024-07-25 09:40:43.643693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.643728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.643747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.649646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.649689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.649705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.656581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.656610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.656642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.664230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.664264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.664283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.671150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.671184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.671203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.678767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.678801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.678820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.686414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 
09:40:43.686442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.686472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.965 [2024-07-25 09:40:43.693912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:10.965 [2024-07-25 09:40:43.693945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.965 [2024-07-25 09:40:43.693964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.701762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.701795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.701820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.709431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.709458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.709489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.716949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.716982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.717000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.724112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.724147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.724166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.731298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.731330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.731349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.738444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.738471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.738501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.745460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.745488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.745518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.753234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.753268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.753287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.760721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.760754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.760773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.768469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.768503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.768536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.777744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.777779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.777798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.786760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.786797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.786817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.797224] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.797260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.797279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.806264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.806299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.806319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.816291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.816327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.816346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.826053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.826088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.826107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.835823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.835858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.835878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.845184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.845219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.845238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.854035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.854071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.854090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
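The digest-error entries surrounding this point come from bdevperf running against nvme0n1 with TCP data digest enabled (bdev_nvme_attach_controller --ddgst) while crc32c corruption is injected through the accel error framework; the test then reads the transient transport error counter back over the bperf RPC socket, which only exists because bdev_nvme_set_options --nvme-error-stat was applied earlier in the trace. A minimal sketch of that flow, assuming the same rpc.py script and /var/tmp/bperf.sock socket used in this run (not part of the captured log):
# mirror the accel_error_inject_error call made by digest.sh before the run
rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
# after the I/O run, read back the transient transport error count for nvme0n1
rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
The jq path here is the dot-form equivalent of the piped filter shown in the digest.sh trace above; the test passes when the returned count is greater than zero.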
00:26:11.223 [2024-07-25 09:40:43.863381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.863424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.863440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.872831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.872865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.872884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.882178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.882212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.882230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.223 [2024-07-25 09:40:43.890263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.223 [2024-07-25 09:40:43.890296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-07-25 09:40:43.890315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.898161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.898195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.898214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.905871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.905905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.905924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.912659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.912707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.912727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.917580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.917607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.917643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.925229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.925262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.925281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.933427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.933456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.933487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.941537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.941566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.941597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.224 [2024-07-25 09:40:43.949413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.224 [2024-07-25 09:40:43.949442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-07-25 09:40:43.949472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:43.957324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:43.957366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:43.957387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:43.965254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:43.965289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:43.965308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:43.973036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:43.973069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:43.973089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:43.981501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:43.981531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:43.981547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:43.990222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:43.990257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:43.990276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:43.997250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:43.997283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:43.997301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.004875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.004907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.004926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.012620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.012665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.012683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.020293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.020345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.028610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.028639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.028668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.036891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.036925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.036944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.044724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.044758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.044777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.052721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.052755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.482 [2024-07-25 09:40:44.052784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.482 [2024-07-25 09:40:44.061119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.482 [2024-07-25 09:40:44.061153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.061172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.069254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.069290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.069310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.076561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.076604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 
[2024-07-25 09:40:44.076631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.084322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.084366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.092819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.092854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.092873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.099692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.099726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.099745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.106587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.106615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.106631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.114467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.114495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.114526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.123220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.123261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.123280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.131775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.131810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.131829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.140091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.140125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.140144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.149552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.149582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.149614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.159407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.159436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.159452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.168728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.168763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.168782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.178539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.178568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.178600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.188071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.188105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.188124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.197763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.197798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.197818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.206852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.206886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.206905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.483 [2024-07-25 09:40:44.214713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.483 [2024-07-25 09:40:44.214748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.483 [2024-07-25 09:40:44.214767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.223040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.223074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.223093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.229962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.229996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.230015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.235884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.235918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.235937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.241973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.242007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.242025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.247912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.247945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.247964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.253642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.253671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.253703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.259458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.259487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.259524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.265411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.265440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.265471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.271649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.271692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.271707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.278239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.278279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.278299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.284508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.284537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.284568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.291017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 
[2024-07-25 09:40:44.291051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.291069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.296866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.296900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.296919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.302845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.302879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.302898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.308599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.308627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.308661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.314393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.314443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.314460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.320467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.320495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.320525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.326422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.326451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.326481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.332402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.332429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.332444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.338624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.338653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.338669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.344788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.344821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.344840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.350329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.350380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.350397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.356080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.742 [2024-07-25 09:40:44.356107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.742 [2024-07-25 09:40:44.356138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.742 [2024-07-25 09:40:44.361689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.361718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.361750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.367179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.367207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.367238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.373034] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.373061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.373092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.378769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.378797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.378827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.385098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.385132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.385150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.393134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.393168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.393187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.400754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.400789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.400808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.408590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.408618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.408634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.415765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.415800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.415819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:11.743 [2024-07-25 09:40:44.423074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.423115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.423142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.431010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.431045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.431064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.437521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.437550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.437582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.443981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.444015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.444034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.450679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.450725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.450745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.457933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.457969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.457988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.465801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.465836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.465855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.743 [2024-07-25 09:40:44.473334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:11.743 [2024-07-25 09:40:44.473377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.743 [2024-07-25 09:40:44.473412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.001 [2024-07-25 09:40:44.482202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.001 [2024-07-25 09:40:44.482247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.001 [2024-07-25 09:40:44.482266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.001 [2024-07-25 09:40:44.489659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.001 [2024-07-25 09:40:44.489702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.001 [2024-07-25 09:40:44.489722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.001 [2024-07-25 09:40:44.496424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.001 [2024-07-25 09:40:44.496452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.001 [2024-07-25 09:40:44.496483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.001 [2024-07-25 09:40:44.503535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.001 [2024-07-25 09:40:44.503563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.001 [2024-07-25 09:40:44.503594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.001 [2024-07-25 09:40:44.511040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.001 [2024-07-25 09:40:44.511074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.001 [2024-07-25 09:40:44.511094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.001 [2024-07-25 09:40:44.518467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.001 [2024-07-25 09:40:44.518495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.518528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.525414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.525442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.525471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.532490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.532517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.532548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.539543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.539571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.539610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.545765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.545798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.545823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.551717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.551751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.551770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.557400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.557428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.557459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.563149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.563177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:12.002 [2024-07-25 09:40:44.563208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.568892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.568925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.568956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.575085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.575112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.575143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.581111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.581137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.581167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.586634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.586675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.586691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.593380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.593408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.600912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.600944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.600976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.610091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.610121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.610153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.618480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.618510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.618542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.626945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.626974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.627005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.635095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.635134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.635166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.644632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.644678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.644694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.653397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.653427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.653460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.661118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.661146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.661178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.668460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.668504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.668531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.676105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.676140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.676171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.683680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.683708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.683739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.690925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.690952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.690982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.699021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.699049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.699079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.707380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.707408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.707440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.715147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.715182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.715213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.722520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.722549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.722582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.002 [2024-07-25 09:40:44.730394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.002 [2024-07-25 09:40:44.730423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.002 [2024-07-25 09:40:44.730456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.260 [2024-07-25 09:40:44.738294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.260 [2024-07-25 09:40:44.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.260 [2024-07-25 09:40:44.738349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.260 [2024-07-25 09:40:44.746313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.746362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.746381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.753761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.753790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.753822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.761327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.761377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.761394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.768973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.769001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.769030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.773816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 
00:26:12.261 [2024-07-25 09:40:44.773844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.773876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.780714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.780742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.780758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.789499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.789530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.789562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.797451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.797481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.797514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.805230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.805266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.805299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.814449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.814478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.814510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.823023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.823052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.823083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.830907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.830950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.830966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.838912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.838940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.838971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.847779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.847807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.847838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.856540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.856570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.856602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.865673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.865702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.865718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.872824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.872852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.872888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.880541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.880570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.880601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.888905] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.888933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.888963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.896392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.896420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.896451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.903512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.903541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.903574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.910717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.910744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.910775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.917794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.917821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.917851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.925068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.925095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.925125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.932307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.932334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.932374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:26:12.261 [2024-07-25 09:40:44.939629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.939657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.939694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.261 [2024-07-25 09:40:44.946990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.261 [2024-07-25 09:40:44.947017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.261 [2024-07-25 09:40:44.947048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.262 [2024-07-25 09:40:44.954600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.262 [2024-07-25 09:40:44.954629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.262 [2024-07-25 09:40:44.954662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.262 [2024-07-25 09:40:44.962103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.262 [2024-07-25 09:40:44.962130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.262 [2024-07-25 09:40:44.962161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.262 [2024-07-25 09:40:44.969758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.262 [2024-07-25 09:40:44.969786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.262 [2024-07-25 09:40:44.969816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.262 [2024-07-25 09:40:44.977248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.262 [2024-07-25 09:40:44.977275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.262 [2024-07-25 09:40:44.977306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.262 [2024-07-25 09:40:44.984331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.262 [2024-07-25 09:40:44.984379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.262 [2024-07-25 09:40:44.984396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.262 [2024-07-25 09:40:44.992010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.262 [2024-07-25 09:40:44.992039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.262 [2024-07-25 09:40:44.992070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.521 [2024-07-25 09:40:44.999577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.521 [2024-07-25 09:40:44.999606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.521 [2024-07-25 09:40:44.999638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.521 [2024-07-25 09:40:45.006895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.521 [2024-07-25 09:40:45.006922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.521 [2024-07-25 09:40:45.006953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.521 [2024-07-25 09:40:45.014084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.521 [2024-07-25 09:40:45.014111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.521 [2024-07-25 09:40:45.014142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.521 [2024-07-25 09:40:45.021439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.521 [2024-07-25 09:40:45.021468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.521 [2024-07-25 09:40:45.021501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.521 [2024-07-25 09:40:45.028842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.521 [2024-07-25 09:40:45.028870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.521 [2024-07-25 09:40:45.028900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.521 [2024-07-25 09:40:45.036139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290) 00:26:12.521 [2024-07-25 09:40:45.036165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.521 [2024-07-25 09:40:45.036197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.521 [2024-07-25 09:40:45.043569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22e7290)
00:26:12.521 [2024-07-25 09:40:45.043598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.521 [2024-07-25 09:40:45.043637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x22e7290) -> READ command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining READ commands on qid:1 between 09:40:45.051 and 09:40:45.613; only the cid, lba and timestamps differ ...]
00:26:13.040
00:26:13.040 Latency(us)
00:26:13.040 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average  min     max
00:26:13.040 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:13.040 nvme0n1 : 2.00  4207.34  525.92  0.00  0.00  3796.85  958.77  10631.40
00:26:13.041 ===================================================================================================================
00:26:13.041 Total : 4207.34  525.92  0.00  0.00  3796.85  958.77  10631.40
00:26:13.041 0
00:26:13.041 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:13.041 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
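The counter that get_transient_errcount reads here is not a separate statistic: it is extracted from the bdev_get_iostat response with the jq filter shown in the records that follow, and it is populated because bdev_nvme_set_options --nvme-error-stat was issued before the controller was attached. A minimal sketch of the equivalent manual query, assuming an SPDK checkout as the working directory and a bdevperf instance still listening on /var/tmp/bperf.sock:

    # Dump per-bdev I/O statistics from the bdevperf RPC socket and extract the
    # transient transport error count for nvme0n1 (a hand-run version of the step
    # the harness performs via bperf_rpc and jq).
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test only asserts that the printed count is greater than zero (272 in this randread pass), i.e. that the injected digest corruption surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions rather than as hard I/O failures.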
00:26:13.041 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:26:13.041 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 ))
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 622228
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 622228 ']'
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 622228
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622228
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622228'
00:26:13.298 killing process with pid 622228
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 622228
00:26:13.298 Received shutdown signal, test time was about 2.000000 seconds
00:26:13.298
00:26:13.298 Latency(us)
00:26:13.298 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:26:13.298 ===================================================================================================================
00:26:13.298 Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:26:13.298 09:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 622228
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=622638
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 622638 /var/tmp/bperf.sock
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 622638 ']'
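The trace that follows repeats the same setup for the write path: a fresh bdevperf (pid 622638) is started for 4096-byte random writes at queue depth 128, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with the data digest (--ddgst) flag, and crc32c corruption is armed in the accel layer before perform_tests drives the workload. A condensed sketch of that sequence, assuming the commands are run from the SPDK checkout and use the same sockets and subsystem NQN as in the trace (the harness issues accel_error_inject_error through rpc_cmd, i.e. the application's default RPC socket rather than /var/tmp/bperf.sock):

    # Start the I/O generator: 4 KiB random writes, queue depth 128, 2 second run;
    # -z makes bdevperf wait for RPC configuration before the job starts.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    # Record NVMe error completions in iostat and retry indefinitely so injected
    # errors are counted instead of failing the job.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the NVMe-oF/TCP controller with data digest (DDGST) enabled.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption in the accel software path (arguments taken verbatim from the trace).
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # Run the workload; each corrupted digest shows up below as a data digest error
    # followed by a COMMAND TRANSIENT TRANSPORT ERROR completion.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests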
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:13.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:13.556 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:13.556 [2024-07-25 09:40:46.223084] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization...
00:26:13.556 [2024-07-25 09:40:46.223170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622638 ]
00:26:13.556 EAL: No free 2048 kB hugepages reported on node 1
00:26:13.556 [2024-07-25 09:40:46.284084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.814 [2024-07-25 09:40:46.396633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:13.814 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:13.814 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:13.814 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:13.814 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.071 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:14.071 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.071 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.071 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.071 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.071 09:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.636 nvme0n1
00:26:14.636 09:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:14.636 09:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.636 09:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.636 09:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.636 09:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:14.636 09:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:14.636 Running I/O for 2 seconds...
00:26:14.636 [2024-07-25 09:40:47.265190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f6458
00:26:14.636 [2024-07-25 09:40:47.266380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.636 [2024-07-25 09:40:47.266437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... the same three-line pattern (Data digest error on tqpair=(0xf6ff30) -> WRITE command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining WRITE commands on qid:1 between 09:40:47.278 and 09:40:47.673; only the cid, lba, pdu and timestamps differ ...]
00:26:15.154 [2024-07-25 09:40:47.684845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fdeb0
00:26:15.154 [2024-07-25 09:40:47.686775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8661 len:1 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.686816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.698137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190eb760 00:26:15.154 [2024-07-25 09:40:47.700286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.700317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.707160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f3a28 00:26:15.154 [2024-07-25 09:40:47.708052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.708090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.720220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f7970 00:26:15.154 [2024-07-25 09:40:47.721116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.721148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.733526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f92c0 00:26:15.154 [2024-07-25 09:40:47.734607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.734635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.747100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e4de8 00:26:15.154 [2024-07-25 09:40:47.748533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.748559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.760453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f5378 00:26:15.154 [2024-07-25 09:40:47.762046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.762078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.773585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f4298 00:26:15.154 [2024-07-25 09:40:47.775440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11335 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.775493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.785100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f5be8 00:26:15.154 [2024-07-25 09:40:47.786530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.786555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.797682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e7c50 00:26:15.154 [2024-07-25 09:40:47.799139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.809647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e95a0 00:26:15.154 [2024-07-25 09:40:47.810806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.810836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.823419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ddc00 00:26:15.154 [2024-07-25 09:40:47.824898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.824929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.835749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f3e60 00:26:15.154 [2024-07-25 09:40:47.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.836717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.848292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f4b08 00:26:15.154 [2024-07-25 09:40:47.849337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.849376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.862707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ee190 00:26:15.154 [2024-07-25 09:40:47.864518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.864549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.154 [2024-07-25 09:40:47.876019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f6458 00:26:15.154 [2024-07-25 09:40:47.877967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.154 [2024-07-25 09:40:47.877998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.889442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e6b70 00:26:15.412 [2024-07-25 09:40:47.891589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.891619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.898512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fc560 00:26:15.412 [2024-07-25 09:40:47.899414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.899439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.911712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190feb58 00:26:15.412 [2024-07-25 09:40:47.912749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.912783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.924264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e0630 00:26:15.412 [2024-07-25 09:40:47.925455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.925480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.937511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fdeb0 00:26:15.412 [2024-07-25 09:40:47.938855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.938896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.950746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ee5c8 00:26:15.412 [2024-07-25 09:40:47.952294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.952326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.964030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f7da8 00:26:15.412 [2024-07-25 09:40:47.965794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.965825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.976882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f6020 00:26:15.412 [2024-07-25 09:40:47.978604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.978630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:47.988902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e88f8 00:26:15.412 [2024-07-25 09:40:47.990509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:47.990534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:48.000530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190eea00 00:26:15.412 [2024-07-25 09:40:48.002426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:48.002451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:48.012297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190df118 00:26:15.412 [2024-07-25 09:40:48.013234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:48.013264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:48.024934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f35f0 00:26:15.412 [2024-07-25 09:40:48.025889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:48.025921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:48.036862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fef90 00:26:15.412 [2024-07-25 09:40:48.037740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:48.037777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:48.050163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ea248 00:26:15.412 [2024-07-25 09:40:48.051232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.412 [2024-07-25 09:40:48.051269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:15.412 [2024-07-25 09:40:48.063461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e0a68 00:26:15.412 [2024-07-25 09:40:48.064690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.064721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.413 [2024-07-25 09:40:48.076719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fc560 00:26:15.413 [2024-07-25 09:40:48.078132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.078163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.413 [2024-07-25 09:40:48.088506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e27f0 00:26:15.413 [2024-07-25 09:40:48.089433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.089459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.413 [2024-07-25 09:40:48.101451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f5be8 00:26:15.413 [2024-07-25 09:40:48.102204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.102234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.413 [2024-07-25 09:40:48.116017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f7970 00:26:15.413 [2024-07-25 09:40:48.117760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.117791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.413 [2024-07-25 09:40:48.129420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e5658 00:26:15.413 [2024-07-25 09:40:48.131326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.131363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.413 [2024-07-25 09:40:48.142883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190eb760 00:26:15.413 [2024-07-25 09:40:48.145012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.413 [2024-07-25 09:40:48.145042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.670 [2024-07-25 09:40:48.152054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f8e88 00:26:15.670 [2024-07-25 09:40:48.153010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.670 [2024-07-25 09:40:48.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.670 [2024-07-25 09:40:48.164127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190eb328 00:26:15.671 [2024-07-25 09:40:48.165041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.165071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.177362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e49b0 00:26:15.671 [2024-07-25 09:40:48.178433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.178458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.191486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ef6a8 00:26:15.671 [2024-07-25 09:40:48.192754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.192786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.203271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190efae0 00:26:15.671 [2024-07-25 09:40:48.204517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.204543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.216608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e2c28 00:26:15.671 [2024-07-25 09:40:48.218031] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.218063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.229983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190df988 00:26:15.671 [2024-07-25 09:40:48.231564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.231615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.243507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f35f0 00:26:15.671 [2024-07-25 09:40:48.245234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.245265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.256743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f57b0 00:26:15.671 [2024-07-25 09:40:48.258643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.258688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.270003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ebb98 00:26:15.671 [2024-07-25 09:40:48.272107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.272137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.279110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fcdd0 00:26:15.671 [2024-07-25 09:40:48.279991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.280017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.292525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ecc78 00:26:15.671 [2024-07-25 09:40:48.293771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.293801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.306660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e27f0 00:26:15.671 [2024-07-25 
09:40:48.308122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.308152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.318543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f8e88 00:26:15.671 [2024-07-25 09:40:48.319982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.320012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.331913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e38d0 00:26:15.671 [2024-07-25 09:40:48.333506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.333530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.345190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f5be8 00:26:15.671 [2024-07-25 09:40:48.346961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.346991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.358536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e8d30 00:26:15.671 [2024-07-25 09:40:48.360466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.360492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.371815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f3e60 00:26:15.671 [2024-07-25 09:40:48.373881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.373926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.380811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ee5c8 00:26:15.671 [2024-07-25 09:40:48.381685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.381730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.671 [2024-07-25 09:40:48.394020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fb048 
00:26:15.671 [2024-07-25 09:40:48.395136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.671 [2024-07-25 09:40:48.395167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.406294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f2948 00:26:15.931 [2024-07-25 09:40:48.407571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.407598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.421026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ee190 00:26:15.931 [2024-07-25 09:40:48.422336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.422374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.434283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fcdd0 00:26:15.931 [2024-07-25 09:40:48.435737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.435769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.446441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ecc78 00:26:15.931 [2024-07-25 09:40:48.447859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.447890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.459449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ff3c8 00:26:15.931 [2024-07-25 09:40:48.460900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.460927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.471329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ed920 00:26:15.931 [2024-07-25 09:40:48.472957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.472982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.483261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with 
pdu=0x2000190de470 00:26:15.931 [2024-07-25 09:40:48.485048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.485074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.495055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fb048 00:26:15.931 [2024-07-25 09:40:48.496925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.496949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.503013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190eff18 00:26:15.931 [2024-07-25 09:40:48.503897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.503922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.515863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f7538 00:26:15.931 [2024-07-25 09:40:48.517290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.517315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.526379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ecc78 00:26:15.931 [2024-07-25 09:40:48.527350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.527391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.536831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f6cc8 00:26:15.931 [2024-07-25 09:40:48.537886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.537912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.549469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e6b70 00:26:15.931 [2024-07-25 09:40:48.550624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.550664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.561055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf6ff30) with pdu=0x2000190ecc78 00:26:15.931 [2024-07-25 09:40:48.562266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.562291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.573814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e4140 00:26:15.931 [2024-07-25 09:40:48.575691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.575717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.581880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f8618 00:26:15.931 [2024-07-25 09:40:48.582669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.582694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.593027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f8a50 00:26:15.931 [2024-07-25 09:40:48.594020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.594044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.605591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e9e10 00:26:15.931 [2024-07-25 09:40:48.606749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.606774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.616042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f7da8 00:26:15.931 [2024-07-25 09:40:48.617137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.931 [2024-07-25 09:40:48.617162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:15.931 [2024-07-25 09:40:48.627851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ea248 00:26:15.932 [2024-07-25 09:40:48.629131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.932 [2024-07-25 09:40:48.629156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:15.932 [2024-07-25 09:40:48.639754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf6ff30) with pdu=0x2000190f3e60 00:26:15.932 [2024-07-25 09:40:48.641134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.932 [2024-07-25 09:40:48.641159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.932 [2024-07-25 09:40:48.650206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f6cc8 00:26:15.932 [2024-07-25 09:40:48.651172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.932 [2024-07-25 09:40:48.651197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.932 [2024-07-25 09:40:48.663003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f8a50 00:26:16.190 [2024-07-25 09:40:48.664666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.664694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.675232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f9b30 00:26:16.190 [2024-07-25 09:40:48.676948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.676977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.685635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f5378 00:26:16.190 [2024-07-25 09:40:48.686889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.686914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.695910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f1ca0 00:26:16.190 [2024-07-25 09:40:48.697511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.697537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.705540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f20d8 00:26:16.190 [2024-07-25 09:40:48.706349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.706395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.717317] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e99d8 00:26:16.190 [2024-07-25 09:40:48.718311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.718336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.729851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f46d0 00:26:16.190 [2024-07-25 09:40:48.730986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.731010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.740437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fe2e8 00:26:16.190 [2024-07-25 09:40:48.741541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.741567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.752177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e7818 00:26:16.190 [2024-07-25 09:40:48.753396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.753437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.762580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190df550 00:26:16.190 [2024-07-25 09:40:48.763381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.763406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.772969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190eaab8 00:26:16.190 [2024-07-25 09:40:48.773779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.773810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 09:40:48.784859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f57b0 00:26:16.190 [2024-07-25 09:40:48.785805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.785830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:16.190 [2024-07-25 
09:40:48.796760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e99d8 00:26:16.190 [2024-07-25 09:40:48.797870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.190 [2024-07-25 09:40:48.797897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.809250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fb8b8 00:26:16.191 [2024-07-25 09:40:48.810542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.810569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.820836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f1430 00:26:16.191 [2024-07-25 09:40:48.822192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.822218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.830436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f4b08 00:26:16.191 [2024-07-25 09:40:48.831053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.831077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.844007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190dfdc0 00:26:16.191 [2024-07-25 09:40:48.845709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.845736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.855863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f4f40 00:26:16.191 [2024-07-25 09:40:48.857845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.857872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.864064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fc998 00:26:16.191 [2024-07-25 09:40:48.864888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.864913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:26:16.191 [2024-07-25 09:40:48.875794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ecc78 00:26:16.191 [2024-07-25 09:40:48.876764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.876789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.887692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f0bc0 00:26:16.191 [2024-07-25 09:40:48.888760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.888785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.899422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f1ca0 00:26:16.191 [2024-07-25 09:40:48.900804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.900830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.911116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f3a28 00:26:16.191 [2024-07-25 09:40:48.912639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.191 [2024-07-25 09:40:48.912679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.191 [2024-07-25 09:40:48.922683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e23b8 00:26:16.451 [2024-07-25 09:40:48.924375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.924402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.931484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190de038 00:26:16.451 [2024-07-25 09:40:48.932301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.932329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.943186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e73e0 00:26:16.451 [2024-07-25 09:40:48.944148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.944173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0019 
p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.954899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f31b8 00:26:16.451 [2024-07-25 09:40:48.955985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.956011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.966267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e95a0 00:26:16.451 [2024-07-25 09:40:48.967372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.967410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.977250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190dece0 00:26:16.451 [2024-07-25 09:40:48.977921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.977946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.991237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ed0b0 00:26:16.451 [2024-07-25 09:40:48.993063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:48.993088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:48.999176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f20d8 00:26:16.451 [2024-07-25 09:40:49.000133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.000158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.012571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190ec408 00:26:16.451 [2024-07-25 09:40:49.014014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.014039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.023225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e12d8 00:26:16.451 [2024-07-25 09:40:49.024619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.024644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.035188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f5378 00:26:16.451 [2024-07-25 09:40:49.036793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.036818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.046967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e1b48 00:26:16.451 [2024-07-25 09:40:49.048772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.048800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.058760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190feb58 00:26:16.451 [2024-07-25 09:40:49.060592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.060618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.066585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e8d30 00:26:16.451 [2024-07-25 09:40:49.067401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.067433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.078352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e23b8 00:26:16.451 [2024-07-25 09:40:49.079515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.079542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.090086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190efae0 00:26:16.451 [2024-07-25 09:40:49.091320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.091365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.101859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f6458 00:26:16.451 [2024-07-25 09:40:49.103237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.103263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.113263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e3d08 00:26:16.451 [2024-07-25 09:40:49.114249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.114274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.124233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fa3a0 00:26:16.451 [2024-07-25 09:40:49.125573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.125599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.135456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f1430 00:26:16.451 [2024-07-25 09:40:49.136576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.136602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.147196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e5220 00:26:16.451 [2024-07-25 09:40:49.148470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.148496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.159106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e9e10 00:26:16.451 [2024-07-25 09:40:49.160750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.160775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.170917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fa3a0 00:26:16.451 [2024-07-25 09:40:49.172710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.451 [2024-07-25 09:40:49.172735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:16.451 [2024-07-25 09:40:49.183015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190e9168 00:26:16.747 [2024-07-25 09:40:49.185004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.185033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.747 [2024-07-25 09:40:49.191459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fb480 00:26:16.747 [2024-07-25 09:40:49.192376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.192403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:16.747 [2024-07-25 09:40:49.203392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f81e0 00:26:16.747 [2024-07-25 09:40:49.204307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.204333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:16.747 [2024-07-25 09:40:49.215430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190f3a28 00:26:16.747 [2024-07-25 09:40:49.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.216509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:16.747 [2024-07-25 09:40:49.226331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fbcf0 00:26:16.747 [2024-07-25 09:40:49.227369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.227395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:16.747 [2024-07-25 09:40:49.239009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190fa7d8 00:26:16.747 [2024-07-25 09:40:49.240275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.240300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:16.747 [2024-07-25 09:40:49.249691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf6ff30) with pdu=0x2000190dece0 00:26:16.747 [2024-07-25 09:40:49.250862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.747 [2024-07-25 09:40:49.250887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:16.747 00:26:16.747 Latency(us) 00:26:16.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.747 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:16.747 nvme0n1 : 2.00 21093.19 82.40 0.00 0.00 6059.72 2827.76 16408.27 00:26:16.747 =================================================================================================================== 
00:26:16.747 Total : 21093.19 82.40 0.00 0.00 6059.72 2827.76 16408.27 00:26:16.747 0 00:26:16.747 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:16.747 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:16.747 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:16.747 | .driver_specific 00:26:16.747 | .nvme_error 00:26:16.747 | .status_code 00:26:16.747 | .command_transient_transport_error' 00:26:16.747 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 622638 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 622638 ']' 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 622638 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622638 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622638' 00:26:17.027 killing process with pid 622638 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 622638 00:26:17.027 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.027 00:26:17.027 Latency(us) 00:26:17.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.027 =================================================================================================================== 00:26:17.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.027 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 622638 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=623048 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 623048 /var/tmp/bperf.sock 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 623048 ']' 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.284 09:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.284 [2024-07-25 09:40:49.878693] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:17.284 [2024-07-25 09:40:49.878773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623048 ] 00:26:17.284 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.284 Zero copy mechanism will not be used. 00:26:17.284 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.284 [2024-07-25 09:40:49.939435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.542 [2024-07-25 09:40:50.060858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.542 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.542 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:17.542 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.542 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.800 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:17.800 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.800 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.800 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.800 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.800 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.058 nvme0n1 00:26:18.316 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:18.316 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.316 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:18.316 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.317 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:18.317 09:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.317 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.317 Zero copy mechanism will not be used. 00:26:18.317 Running I/O for 2 seconds... 00:26:18.317 [2024-07-25 09:40:50.926666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.927041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.927086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.934676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.935029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.935058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.942642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.942952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.942980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.949870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.950165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.950193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.957941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.958233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.958261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.966291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.966608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.966638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.974565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.974878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.982936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.983261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.983289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.991294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.991674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.991719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:50.999380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:50.999716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:50.999751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.006082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.006397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.006425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.012681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.012967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.012995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.019366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.019683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.019711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.026004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.026375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.026417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.034049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.034426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.034470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.041044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.041362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.041391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.317 [2024-07-25 09:40:51.048110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.317 [2024-07-25 09:40:51.048488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.317 [2024-07-25 09:40:51.048518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.576 [2024-07-25 09:40:51.055689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.576 [2024-07-25 09:40:51.056028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.576 [2024-07-25 09:40:51.056056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.576 [2024-07-25 09:40:51.063716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.576 
[2024-07-25 09:40:51.064011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.576 [2024-07-25 09:40:51.064039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.576 [2024-07-25 09:40:51.072259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.576 [2024-07-25 09:40:51.072627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.576 [2024-07-25 09:40:51.072657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.576 [2024-07-25 09:40:51.079229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.576 [2024-07-25 09:40:51.079552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.079581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.086022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.086402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.086446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.093247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.093589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.093619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.099904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.100260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.100288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.106442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.106753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.106780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.112501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with 
pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.112818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.112846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.119594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.119896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.119930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.126120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.126430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.126459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.133607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.133903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.133936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.139457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.139766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.139793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.145251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.145567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.145595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.150987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.151273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.151300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.156842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.157134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.157161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.163442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.163763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.163792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.169397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.169702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.169744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.175304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.175669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.175698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.181273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.181600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.181631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.187329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.187660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.187687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.193784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.194080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.194108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.200336] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.200670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.200699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.207296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.207638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.207667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.213191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.213502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.213530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.218955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.219239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.219265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.224653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.224960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.224987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.230395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.230702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.230728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.236105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.236412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.236439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:18.577 [2024-07-25 09:40:51.241744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.577 [2024-07-25 09:40:51.242026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.577 [2024-07-25 09:40:51.242052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.577 [2024-07-25 09:40:51.247520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.247888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.247914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.253433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.253792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.253829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.259269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.259579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.259608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.266423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.266749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.266777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.272482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.272793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.272821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.278301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.278614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.278648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.284045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.284384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.284412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.289799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.290081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.290108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.295487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.295801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.295830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.301200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.301510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.301538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.578 [2024-07-25 09:40:51.307243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.578 [2024-07-25 09:40:51.307632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.578 [2024-07-25 09:40:51.307660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.314153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.314548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.314576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.320588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.320944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.320971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.327221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.327530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.327558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.334251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.334470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.334499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.341430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.341736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.341769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.348450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.348778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.348810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.355753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.356083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.356115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.362822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.363208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.363240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.370043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.370419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.370446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.376619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.376991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.377024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.383575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.383946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.383979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.390408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.390730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.390762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.397458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.397780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.397812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.404678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.405024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.405056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.412130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.412503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.412530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.418877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.419212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 
[2024-07-25 09:40:51.419243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.427127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.427519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.427561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.435651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.435819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.435850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.444091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.444466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.444493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.452329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.452646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.452691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.460292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.460606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.460642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.467162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.467501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.467529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.474021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.474426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.474466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.837 [2024-07-25 09:40:51.481125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.837 [2024-07-25 09:40:51.481464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.837 [2024-07-25 09:40:51.481492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.488298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.488680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.488712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.496643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.496962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.496995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.505292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.505596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.505623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.513655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.513993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.514025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.522064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.522463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.522504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.529967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.530299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.530331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.537085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.537434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.537461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.544134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.544477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.544506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.551161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.551493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.551521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.559162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.559552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.559595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.838 [2024-07-25 09:40:51.567210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:18.838 [2024-07-25 09:40:51.567587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.838 [2024-07-25 09:40:51.567629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.574757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.575086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.575119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.581817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.582208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.582240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.590082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.590466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.590508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.597366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.597743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.597776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.604464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.604793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.604825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.611684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.612031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.612064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.619994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.620318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.627107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.627450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.627477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.633973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 
[2024-07-25 09:40:51.634296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.634327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.641602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.642004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.642035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.649856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.650183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.650215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.657157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.657489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.657522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.664233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.664553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.664580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.672300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.672671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.672700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.680073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.680466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.680516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.687462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) 
with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.687756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.687783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.694724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.695110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.695141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.703225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.703593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.703621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.711681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.712008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.712040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.719959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.720354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.727673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.728081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.728113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.736274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.736622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.736649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.744196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.096 [2024-07-25 09:40:51.744526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.096 [2024-07-25 09:40:51.744554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.096 [2024-07-25 09:40:51.753397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.753732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.753764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.761098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.761442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.761470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.768498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.768902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.768933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.775505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.775906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.775938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.782767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.783147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.783178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.790100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.790493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.790533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.797250] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.797619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.797646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.804461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.804863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.804896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.812406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.812794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.812826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.097 [2024-07-25 09:40:51.820315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.097 [2024-07-25 09:40:51.820624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.097 [2024-07-25 09:40:51.820666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.829505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.829861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.829893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.838158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.838549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.838577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.847940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.848314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.848346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:19.355 [2024-07-25 09:40:51.857515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.857852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.857885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.864570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.864938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.864976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.871740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.872122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.872154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.878952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.879277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.879308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.886372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.886691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.886724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.895722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.896051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.896083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.903508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.903904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.903936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.910871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.911179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.355 [2024-07-25 09:40:51.911210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.355 [2024-07-25 09:40:51.919016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.355 [2024-07-25 09:40:51.919349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.919401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.927667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.928030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.928061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.937588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.937938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.937966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.945187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.945521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.945549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.952572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.952919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.952951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.959664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.960006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.960037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.968549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.968964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.968997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.976948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.977274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.977306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.984694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.985024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.985056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.992153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.992488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.992531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:51.999431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:51.999809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:51.999842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.007139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.007228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.007259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.016049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.016398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.016442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.024111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.024490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.024532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.032983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.033409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.033435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.041455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.041850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.041881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.049455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.049764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.049796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.058045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.058380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.058424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.065654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.065992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.066024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.072851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.073235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 
09:40:52.073273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.079932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.080257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.080288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.356 [2024-07-25 09:40:52.087657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.356 [2024-07-25 09:40:52.088054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.356 [2024-07-25 09:40:52.088085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.095194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.095527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.095555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.103020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.103346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.103399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.111421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.111740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.111772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.119716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.120061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.120093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.127573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.127975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.128006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.135402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.135738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.135770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.142623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.142996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.143028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.614 [2024-07-25 09:40:52.150037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.614 [2024-07-25 09:40:52.150434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.614 [2024-07-25 09:40:52.150475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.157848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.158237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.158269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.165743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.166127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.166160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.172727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.173084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.173116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.179367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.179690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.179736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.185875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.186198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.186230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.192380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.192700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.192727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.199049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.199482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.199513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.206965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.207344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.207399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.214385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.214702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.214734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.221272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.221591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.221618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.228290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.228600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.228627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.235022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.235425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.235467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.241610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.241945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.241976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.248433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.248802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.248834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.256026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.256342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.256398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.262993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.263318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.263350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.269487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.269831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.269863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.276090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.276430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.276458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.282443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.282764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.282796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.288969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.289288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.289320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.295264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.295572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.295599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.301476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.301801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.301832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.308530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.308856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.615 [2024-07-25 09:40:52.308889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.615 [2024-07-25 09:40:52.314875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.615 [2024-07-25 09:40:52.315194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.616 [2024-07-25 09:40:52.315226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.616 [2024-07-25 09:40:52.321102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.616 
[2024-07-25 09:40:52.321436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.616 [2024-07-25 09:40:52.321464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.616 [2024-07-25 09:40:52.327289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.616 [2024-07-25 09:40:52.327591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.616 [2024-07-25 09:40:52.327618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.616 [2024-07-25 09:40:52.334012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.616 [2024-07-25 09:40:52.334329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.616 [2024-07-25 09:40:52.334371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.616 [2024-07-25 09:40:52.340868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.616 [2024-07-25 09:40:52.341208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.616 [2024-07-25 09:40:52.341239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.348608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.348959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.348991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.355867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.356185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.356218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.362900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.363229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.363261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.369152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) 
with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.369486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.369514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.375454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.375789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.375827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.381966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.382280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.382311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.389184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.389516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.389544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.396500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.396871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.396903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.403227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.403583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.403611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.409860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.410217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.410247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.416654] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.416974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.417005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.422996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.423310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.423341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.429559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.429979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.430021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.874 [2024-07-25 09:40:52.436371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.874 [2024-07-25 09:40:52.436483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.874 [2024-07-25 09:40:52.436508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.444017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.444340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.444376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.451669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.451985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.452018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.458451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.458770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.458803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.465225] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.465540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.465567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.471556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.471908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.471939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.478008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.478415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.478442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.484433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.484760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.484791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.490799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.491120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.491151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.497409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.497719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.497750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.505264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.505625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.505652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:19.875 [2024-07-25 09:40:52.511832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.512157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.512188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.518190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.518509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.518535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.524714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.525040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.525071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.531006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.531084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.531113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.538369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.538660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.538711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.545650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.546019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.546052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.552216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.552534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.552566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.558580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.558930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.558963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.565170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.565548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.565589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.571776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.572194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.572237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.578324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.578722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.578754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.584827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.585148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.585179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.591838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.592162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.592193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.599010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.599467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.599494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.875 [2024-07-25 09:40:52.606647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:19.875 [2024-07-25 09:40:52.607018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.875 [2024-07-25 09:40:52.607050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.613762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.614097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.614128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.620808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.621124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.621156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.628021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.628339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.628379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.635113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.635431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.635457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.642541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.642963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.642999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.649180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.649509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.649536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.656230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.656570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.656598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.663623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.663941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.663972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.670805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.671169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.671201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.678121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.678454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.678481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.684597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.684950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.684982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.691106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.691504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.691545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.697768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.698107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 
[2024-07-25 09:40:52.698138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.704204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.704528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.704555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.710753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.711107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.711138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.717234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.717544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.717572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.725029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.725408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.725435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.732919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.733235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.733280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.739538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.739874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.739906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.746088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.746426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.746453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.752553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.752993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.753023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.134 [2024-07-25 09:40:52.760002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.134 [2024-07-25 09:40:52.760416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.134 [2024-07-25 09:40:52.760445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.766771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.767091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.767122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.773409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.773745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.773776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.779877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.780218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.780249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.787491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.787964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.787996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.794737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.795058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.795090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.802688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.803044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.803076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.811563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.811928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.811960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.819258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.819612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.819639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.827325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.827702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.827734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.835990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.836309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.836346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.844593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.844911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.844942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.851990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.852376] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.852423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.858946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.859243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.859276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.135 [2024-07-25 09:40:52.865492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.135 [2024-07-25 09:40:52.865828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.135 [2024-07-25 09:40:52.865860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.872271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.872576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.872605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.879427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.879713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.879747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.887540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.887865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.887897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.894464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.894825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.894856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.901090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.901397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.901442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.907603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.907959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.907990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.914163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.914455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.914482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.393 [2024-07-25 09:40:52.920720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf70270) with pdu=0x2000190fef90 00:26:20.393 [2024-07-25 09:40:52.921086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.393 [2024-07-25 09:40:52.921124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.393 00:26:20.393 Latency(us) 00:26:20.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.393 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:20.393 nvme0n1 : 2.00 4314.08 539.26 0.00 0.00 3700.35 2002.49 9903.22 00:26:20.393 =================================================================================================================== 00:26:20.393 Total : 4314.08 539.26 0.00 0.00 3700.35 2002.49 9903.22 00:26:20.393 0 00:26:20.393 09:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:20.393 09:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:20.393 09:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:20.393 09:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:20.393 | .driver_specific 00:26:20.393 | .nvme_error 00:26:20.393 | .status_code 00:26:20.393 | .command_transient_transport_error' 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 278 > 0 )) 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 623048 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 623048 ']' 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 623048 
00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 623048 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 623048' 00:26:20.651 killing process with pid 623048 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 623048 00:26:20.651 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.651 00:26:20.651 Latency(us) 00:26:20.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.651 =================================================================================================================== 00:26:20.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.651 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 623048 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 621678 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 621678 ']' 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 621678 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 621678 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 621678' 00:26:20.908 killing process with pid 621678 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 621678 00:26:20.908 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 621678 00:26:21.165 00:26:21.165 real 0m15.461s 00:26:21.165 user 0m30.002s 00:26:21.165 sys 0m5.035s 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.165 ************************************ 00:26:21.165 END TEST nvmf_digest_error 00:26:21.165 ************************************ 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:21.165 09:40:53 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.165 rmmod nvme_tcp 00:26:21.165 rmmod nvme_fabrics 00:26:21.165 rmmod nvme_keyring 00:26:21.165 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 621678 ']' 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 621678 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 621678 ']' 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 621678 00:26:21.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (621678) - No such process 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 621678 is not found' 00:26:21.166 Process with pid 621678 is not found 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.166 09:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:23.693 00:26:23.693 real 0m36.148s 00:26:23.693 user 1m1.756s 00:26:23.693 sys 0m11.773s 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:23.693 ************************************ 00:26:23.693 END TEST nvmf_digest 00:26:23.693 ************************************ 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.693 ************************************ 00:26:23.693 START TEST nvmf_bdevperf 00:26:23.693 ************************************ 00:26:23.693 09:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:23.693 * Looking for test storage... 00:26:23.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:23.693 09:40:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:23.693 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:23.694 09:40:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.594 09:40:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:25.594 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:25.594 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:25.594 09:40:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:25.594 Found net devices under 0000:82:00.0: cvl_0_0 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.594 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:25.595 Found net devices under 0000:82:00.1: cvl_0_1 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.595 09:40:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:25.595 09:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:25.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:26:25.595 00:26:25.595 --- 10.0.0.2 ping statistics --- 00:26:25.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.595 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:25.595 00:26:25.595 --- 10.0.0.1 ping statistics --- 00:26:25.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.595 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=625512 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 625512 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 625512 ']' 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.595 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.595 [2024-07-25 09:40:58.139452] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
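The trace above is the nvmf_tcp_init step: one port of the ice/E810 pair (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) so that target and initiator can exchange real TCP traffic on a single host, port 4420 is opened in iptables, and reachability is verified in both directions with a single ping before the target app is launched inside that namespace. A condensed sketch of that plumbing, replaying the commands traced above and assuming root plus the cvl_0_0/cvl_0_1 devices discovered earlier (the two ports are evidently linked so traffic can pass between the namespaces over the wire):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target-side port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns

Every nvmf_tgt invocation that follows is wrapped in "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), which is why the target banner starting above shows it coming up inside the namespace with -m 0xE while bdevperf later connects from the root namespace.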
00:26:25.595 [2024-07-25 09:40:58.139550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.595 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.595 [2024-07-25 09:40:58.206553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:25.853 [2024-07-25 09:40:58.328150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.853 [2024-07-25 09:40:58.328199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.853 [2024-07-25 09:40:58.328215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.853 [2024-07-25 09:40:58.328230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.853 [2024-07-25 09:40:58.328242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.853 [2024-07-25 09:40:58.328339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.853 [2024-07-25 09:40:58.331377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.853 [2024-07-25 09:40:58.331390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.853 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:25.853 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:25.853 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.854 [2024-07-25 09:40:58.487316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.854 Malloc0 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.854 09:40:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.854 [2024-07-25 09:40:58.557950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:25.854 { 00:26:25.854 "params": { 00:26:25.854 "name": "Nvme$subsystem", 00:26:25.854 "trtype": "$TEST_TRANSPORT", 00:26:25.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:25.854 "adrfam": "ipv4", 00:26:25.854 "trsvcid": "$NVMF_PORT", 00:26:25.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:25.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:25.854 "hdgst": ${hdgst:-false}, 00:26:25.854 "ddgst": ${ddgst:-false} 00:26:25.854 }, 00:26:25.854 "method": "bdev_nvme_attach_controller" 00:26:25.854 } 00:26:25.854 EOF 00:26:25.854 )") 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:25.854 09:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:25.854 "params": { 00:26:25.854 "name": "Nvme1", 00:26:25.854 "trtype": "tcp", 00:26:25.854 "traddr": "10.0.0.2", 00:26:25.854 "adrfam": "ipv4", 00:26:25.854 "trsvcid": "4420", 00:26:25.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:25.854 "hdgst": false, 00:26:25.854 "ddgst": false 00:26:25.854 }, 00:26:25.854 "method": "bdev_nvme_attach_controller" 00:26:25.854 }' 00:26:26.112 [2024-07-25 09:40:58.607702] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
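At this point the target side is fully configured: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem (nqn.2016-06.io.spdk:cnode1) exposing it as namespace 1, and a listener on 10.0.0.2:4420. The rpc_cmd helper drives the target's JSON-RPC socket (/var/tmp/spdk.sock, per the waitforlisten message above); a rough scripts/rpc.py equivalent of the calls traced above, shown only as a sketch:

  # same arguments as the rpc_cmd invocations in the log, default RPC socket assumed
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf initiator launched next gets no config file on disk: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry for Nvme1 pointing at 10.0.0.2:4420 (the JSON fragment printed above) and feeds it over --json /dev/fd/62, so the only bdev the initiator sees is the remote namespace.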
00:26:26.112 [2024-07-25 09:40:58.607771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625546 ] 00:26:26.112 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.112 [2024-07-25 09:40:58.670845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.112 [2024-07-25 09:40:58.788900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.370 Running I/O for 1 seconds... 00:26:27.743 00:26:27.743 Latency(us) 00:26:27.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.743 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:27.743 Verification LBA range: start 0x0 length 0x4000 00:26:27.743 Nvme1n1 : 1.01 8820.94 34.46 0.00 0.00 14445.59 3009.80 14757.74 00:26:27.743 =================================================================================================================== 00:26:27.743 Total : 8820.94 34.46 0.00 0.00 14445.59 3009.80 14757.74 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=625746 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.743 { 00:26:27.743 "params": { 00:26:27.743 "name": "Nvme$subsystem", 00:26:27.743 "trtype": "$TEST_TRANSPORT", 00:26:27.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.743 "adrfam": "ipv4", 00:26:27.743 "trsvcid": "$NVMF_PORT", 00:26:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.743 "hdgst": ${hdgst:-false}, 00:26:27.743 "ddgst": ${ddgst:-false} 00:26:27.743 }, 00:26:27.743 "method": "bdev_nvme_attach_controller" 00:26:27.743 } 00:26:27.743 EOF 00:26:27.743 )") 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:27.743 09:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.743 "params": { 00:26:27.743 "name": "Nvme1", 00:26:27.743 "trtype": "tcp", 00:26:27.743 "traddr": "10.0.0.2", 00:26:27.743 "adrfam": "ipv4", 00:26:27.743 "trsvcid": "4420", 00:26:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.743 "hdgst": false, 00:26:27.743 "ddgst": false 00:26:27.743 }, 00:26:27.743 "method": "bdev_nvme_attach_controller" 00:26:27.743 }' 00:26:27.743 [2024-07-25 09:41:00.349938] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
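The first pass (QD 128, 4 KiB verify I/O, 1 second) completes cleanly at 8820.94 IOPS; the MiB/s column is simply IOPS times the 4096-byte I/O size, which a quick check confirms:

  # sanity-check bdevperf's MiB/s column from its IOPS column (4 KiB I/Os)
  awk 'BEGIN { printf "%.2f MiB/s\n", 8820.94 * 4096 / 1048576 }'   # prints 34.46 MiB/s

A second bdevperf instance is then started for a 15-second run (-t 15 -f) against the same JSON-generated Nvme1 controller, and partway through it the script kills the target with kill -9 625512 (the nvmfpid recorded earlier). That is what the wall of "ABORTED - SQ DELETION" completions below reflects: once the connection to the dead target drops, the initiator side tears down the qpair and completes every outstanding verify command with that abort status.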
00:26:27.743 [2024-07-25 09:41:00.350022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625746 ] 00:26:27.743 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.743 [2024-07-25 09:41:00.413992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.001 [2024-07-25 09:41:00.523010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.258 Running I/O for 15 seconds... 00:26:30.792 09:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 625512 00:26:30.792 09:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:30.792 [2024-07-25 09:41:03.318016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.792 [2024-07-25 09:41:03.318079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.792 [2024-07-25 09:41:03.318125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.792 [2024-07-25 09:41:03.318144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.792 [2024-07-25 09:41:03.318164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.792 [2024-07-25 09:41:03.318181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.792 [2024-07-25 09:41:03.318200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.792 [2024-07-25 09:41:03.318217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.792 [2024-07-25 09:41:03.318234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.792 [2024-07-25 09:41:03.318251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.792 [2024-07-25 09:41:03.318269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.792 [2024-07-25 09:41:03.318287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.793 [2024-07-25 09:41:03.318322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:30.793 [2024-07-25 09:41:03.318752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.318983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.318998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-07-25 09:41:03.319540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.793 [2024-07-25 09:41:03.319555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.319569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.319600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.319629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.319680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.319713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.319745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.319971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.319988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.794 [2024-07-25 09:41:03.320104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-07-25 09:41:03.320610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320783] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.320970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.794 [2024-07-25 09:41:03.320985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.794 [2024-07-25 09:41:03.321002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:30.795 [2024-07-25 09:41:03.321492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 
09:41:03.321840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.321973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.321988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.795 [2024-07-25 09:41:03.322243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.795 [2024-07-25 09:41:03.322258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.796 [2024-07-25 09:41:03.322292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.796 [2024-07-25 09:41:03.322324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.796 [2024-07-25 09:41:03.322363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.796 [2024-07-25 09:41:03.322398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.796 [2024-07-25 09:41:03.322452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262830 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.322492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:30.796 [2024-07-25 09:41:03.322504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:30.796 [2024-07-25 09:41:03.322515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55336 len:8 PRP1 0x0 PRP2 0x0 00:26:30.796 [2024-07-25 09:41:03.322528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.796 [2024-07-25 09:41:03.322594] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1262830 was disconnected and freed. reset controller. 00:26:30.796 [2024-07-25 09:41:03.326474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.326553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.327325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.327384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.327419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.327649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.327903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.327927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.327944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.331558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.796 [2024-07-25 09:41:03.340693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.341153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.341200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.341218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.341480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.341724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.341748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.341764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.345351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.796 [2024-07-25 09:41:03.354663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.355141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.355173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.355191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.355444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.355689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.355713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.355729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.359309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.796 [2024-07-25 09:41:03.368588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.369047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.369079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.369097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.369338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.369593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.369618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.369634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.373212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.796 [2024-07-25 09:41:03.382521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.382945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.382976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.382999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.383240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.383496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.383521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.383536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.387114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.796 [2024-07-25 09:41:03.396416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.396901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.396952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.396969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.397209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.397464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.397489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.397504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.401083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.796 [2024-07-25 09:41:03.410392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.410832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.410863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.410881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.411120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.411376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.411400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.411415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.414995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.796 [2024-07-25 09:41:03.424299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.424774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.424806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.424824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.796 [2024-07-25 09:41:03.425063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.796 [2024-07-25 09:41:03.425305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.796 [2024-07-25 09:41:03.425334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.796 [2024-07-25 09:41:03.425350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.796 [2024-07-25 09:41:03.428941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.796 [2024-07-25 09:41:03.438235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.796 [2024-07-25 09:41:03.438701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.796 [2024-07-25 09:41:03.438732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.796 [2024-07-25 09:41:03.438750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.797 [2024-07-25 09:41:03.438988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.797 [2024-07-25 09:41:03.439231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.797 [2024-07-25 09:41:03.439254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.797 [2024-07-25 09:41:03.439270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.797 [2024-07-25 09:41:03.442875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.797 [2024-07-25 09:41:03.452176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.797 [2024-07-25 09:41:03.452658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.797 [2024-07-25 09:41:03.452689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.797 [2024-07-25 09:41:03.452707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.797 [2024-07-25 09:41:03.452945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.797 [2024-07-25 09:41:03.453188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.797 [2024-07-25 09:41:03.453211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.797 [2024-07-25 09:41:03.453226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.797 [2024-07-25 09:41:03.456815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.797 [2024-07-25 09:41:03.466122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.797 [2024-07-25 09:41:03.466602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.797 [2024-07-25 09:41:03.466633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.797 [2024-07-25 09:41:03.466651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.797 [2024-07-25 09:41:03.466890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.797 [2024-07-25 09:41:03.467133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.797 [2024-07-25 09:41:03.467156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.797 [2024-07-25 09:41:03.467172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.797 [2024-07-25 09:41:03.470764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.797 [2024-07-25 09:41:03.480060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.798 [2024-07-25 09:41:03.480504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.798 [2024-07-25 09:41:03.480535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.798 [2024-07-25 09:41:03.480553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.798 [2024-07-25 09:41:03.480792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.798 [2024-07-25 09:41:03.481035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.798 [2024-07-25 09:41:03.481059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.798 [2024-07-25 09:41:03.481073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.798 [2024-07-25 09:41:03.484662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.798 [2024-07-25 09:41:03.493958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.798 [2024-07-25 09:41:03.494435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.798 [2024-07-25 09:41:03.494466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.798 [2024-07-25 09:41:03.494484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.798 [2024-07-25 09:41:03.494722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.798 [2024-07-25 09:41:03.494965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.798 [2024-07-25 09:41:03.494989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.798 [2024-07-25 09:41:03.495004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.798 [2024-07-25 09:41:03.498593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.798 [2024-07-25 09:41:03.507896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.798 [2024-07-25 09:41:03.508366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.798 [2024-07-25 09:41:03.508398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.798 [2024-07-25 09:41:03.508415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.798 [2024-07-25 09:41:03.508654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:30.798 [2024-07-25 09:41:03.508896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.798 [2024-07-25 09:41:03.508919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.798 [2024-07-25 09:41:03.508935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.798 [2024-07-25 09:41:03.512527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.798 [2024-07-25 09:41:03.521828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.798 [2024-07-25 09:41:03.522266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.798 [2024-07-25 09:41:03.522297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:30.798 [2024-07-25 09:41:03.522314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:30.798 [2024-07-25 09:41:03.522571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.522815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.522839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.522854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.526440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.061 [2024-07-25 09:41:03.535741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.536177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.536207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.536225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.536476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.536730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.536754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.536769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.540347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.061 [2024-07-25 09:41:03.549663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.550136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.550167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.550185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.550437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.550681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.550705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.550720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.554298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.061 [2024-07-25 09:41:03.563763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.564233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.564264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.564282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.564533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.564776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.564800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.564821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.568410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.061 [2024-07-25 09:41:03.577747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.578190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.578222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.578240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.578490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.578735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.578758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.578773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.582351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.061 [2024-07-25 09:41:03.591667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.592113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.592144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.592161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.592414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.592657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.592681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.592696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.596276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.061 [2024-07-25 09:41:03.605582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.606022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.606053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.606071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.606309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.606565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.606589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.606604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.610183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.061 [2024-07-25 09:41:03.619489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.619962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.619997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.620016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.620255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.620511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.620535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.620551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.624132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.061 [2024-07-25 09:41:03.633439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.633921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.633951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.633969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.634207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.634462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.634486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.634502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.638082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.061 [2024-07-25 09:41:03.647426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.061 [2024-07-25 09:41:03.647913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-07-25 09:41:03.647944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.061 [2024-07-25 09:41:03.647962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.061 [2024-07-25 09:41:03.648201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.061 [2024-07-25 09:41:03.648456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.061 [2024-07-25 09:41:03.648480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.061 [2024-07-25 09:41:03.648495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.061 [2024-07-25 09:41:03.652077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.061 [2024-07-25 09:41:03.661407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.661831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.661884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.661902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.662140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.662402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.662426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.662441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.666022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.062 [2024-07-25 09:41:03.675344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.675753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.675810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.675828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.676067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.676311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.676335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.676350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.679946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.062 [2024-07-25 09:41:03.689302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.689706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.689737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.689755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.689994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.690237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.690260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.690275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.693865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.062 [2024-07-25 09:41:03.703176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.703553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.703584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.703602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.703840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.704083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.704107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.704122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.707728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.062 [2024-07-25 09:41:03.717036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.717419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.717451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.717468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.717708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.717951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.717975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.717990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.721581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.062 [2024-07-25 09:41:03.730813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.731182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.731226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.731242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.731508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.731736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.731756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.731769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.735296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.062 [2024-07-25 09:41:03.744274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.744656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.744699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.744715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.744932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.745154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.745174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.745187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.748672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.062 [2024-07-25 09:41:03.758305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.758720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.758750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.758773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.759013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.759256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.759280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.759294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.763071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.062 [2024-07-25 09:41:03.772195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.772569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.772598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.772614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.772870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.773113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.773136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.773152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.776737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.062 [2024-07-25 09:41:03.786159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.062 [2024-07-25 09:41:03.786521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-07-25 09:41:03.786552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.062 [2024-07-25 09:41:03.786570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.062 [2024-07-25 09:41:03.786809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.062 [2024-07-25 09:41:03.787052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.062 [2024-07-25 09:41:03.787075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.062 [2024-07-25 09:41:03.787090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.062 [2024-07-25 09:41:03.790688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.321 [2024-07-25 09:41:03.800210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.321 [2024-07-25 09:41:03.800573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.321 [2024-07-25 09:41:03.800605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.321 [2024-07-25 09:41:03.800622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.321 [2024-07-25 09:41:03.800861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.321 [2024-07-25 09:41:03.801104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.321 [2024-07-25 09:41:03.801132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.321 [2024-07-25 09:41:03.801148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.321 [2024-07-25 09:41:03.804738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.321 [2024-07-25 09:41:03.814256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.321 [2024-07-25 09:41:03.814671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.321 [2024-07-25 09:41:03.814702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.321 [2024-07-25 09:41:03.814719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.321 [2024-07-25 09:41:03.814958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.321 [2024-07-25 09:41:03.815202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.321 [2024-07-25 09:41:03.815225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.321 [2024-07-25 09:41:03.815240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.321 [2024-07-25 09:41:03.818829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.321 [2024-07-25 09:41:03.828144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.321 [2024-07-25 09:41:03.828629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.321 [2024-07-25 09:41:03.828660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.321 [2024-07-25 09:41:03.828678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.321 [2024-07-25 09:41:03.828916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.321 [2024-07-25 09:41:03.829159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.321 [2024-07-25 09:41:03.829182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.321 [2024-07-25 09:41:03.829197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.321 [2024-07-25 09:41:03.832782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.321 [2024-07-25 09:41:03.842091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.321 [2024-07-25 09:41:03.842539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.321 [2024-07-25 09:41:03.842570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.321 [2024-07-25 09:41:03.842587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.321 [2024-07-25 09:41:03.842826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.321 [2024-07-25 09:41:03.843070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.321 [2024-07-25 09:41:03.843093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.321 [2024-07-25 09:41:03.843108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.321 [2024-07-25 09:41:03.846701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.321 [2024-07-25 09:41:03.856006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.321 [2024-07-25 09:41:03.856496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.321 [2024-07-25 09:41:03.856529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.321 [2024-07-25 09:41:03.856547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.321 [2024-07-25 09:41:03.856786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.321 [2024-07-25 09:41:03.857029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.857053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.857068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.860666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.322 [2024-07-25 09:41:03.869967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.870449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.870480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.870498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.870737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.870979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.871002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.871017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.874605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.322 [2024-07-25 09:41:03.883905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.884314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.884344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.884374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.884615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.884858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.884881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.884896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.888483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.322 [2024-07-25 09:41:03.897802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.898257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.898309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.898335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.898587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.898831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.898855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.898870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.902458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.322 [2024-07-25 09:41:03.911756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.912219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.912250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.912268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.912520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.912765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.912788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.912803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.916388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.322 [2024-07-25 09:41:03.925688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.926157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.926188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.926206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.926457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.926701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.926725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.926740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.930319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.322 [2024-07-25 09:41:03.939625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.940100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.940130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.940148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.940399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.940642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.940671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.940687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.944282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.322 [2024-07-25 09:41:03.953608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.954012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.954043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.954060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.954300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.954552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.954576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.954591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.958174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.322 [2024-07-25 09:41:03.967656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.968047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.968105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.968123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.968373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.968617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.968641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.968657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.972239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.322 [2024-07-25 09:41:03.981548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.981922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.981953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.981971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.982210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.982464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.982488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.322 [2024-07-25 09:41:03.982503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.322 [2024-07-25 09:41:03.986082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.322 [2024-07-25 09:41:03.995603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.322 [2024-07-25 09:41:03.995985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.322 [2024-07-25 09:41:03.996016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.322 [2024-07-25 09:41:03.996033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.322 [2024-07-25 09:41:03.996272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.322 [2024-07-25 09:41:03.996529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.322 [2024-07-25 09:41:03.996553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.323 [2024-07-25 09:41:03.996569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.323 [2024-07-25 09:41:04.000150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.323 [2024-07-25 09:41:04.009479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.323 [2024-07-25 09:41:04.009919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.323 [2024-07-25 09:41:04.009950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.323 [2024-07-25 09:41:04.009968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.323 [2024-07-25 09:41:04.010207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.323 [2024-07-25 09:41:04.010462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.323 [2024-07-25 09:41:04.010486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.323 [2024-07-25 09:41:04.010501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.323 [2024-07-25 09:41:04.014082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.323 [2024-07-25 09:41:04.023405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.323 [2024-07-25 09:41:04.023827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.323 [2024-07-25 09:41:04.023882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.323 [2024-07-25 09:41:04.023899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.323 [2024-07-25 09:41:04.024138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.323 [2024-07-25 09:41:04.024413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.323 [2024-07-25 09:41:04.024438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.323 [2024-07-25 09:41:04.024454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.323 [2024-07-25 09:41:04.028035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.323 [2024-07-25 09:41:04.037344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.323 [2024-07-25 09:41:04.037760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.323 [2024-07-25 09:41:04.037810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.323 [2024-07-25 09:41:04.037828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.323 [2024-07-25 09:41:04.038076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.323 [2024-07-25 09:41:04.038319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.323 [2024-07-25 09:41:04.038342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.323 [2024-07-25 09:41:04.038366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.323 [2024-07-25 09:41:04.041953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.323 [2024-07-25 09:41:04.051263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.323 [2024-07-25 09:41:04.051682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.323 [2024-07-25 09:41:04.051736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.323 [2024-07-25 09:41:04.051754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.323 [2024-07-25 09:41:04.051993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.323 [2024-07-25 09:41:04.052237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.323 [2024-07-25 09:41:04.052261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.323 [2024-07-25 09:41:04.052276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.582 [2024-07-25 09:41:04.055869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.582 [2024-07-25 09:41:04.065183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.582 [2024-07-25 09:41:04.065653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.582 [2024-07-25 09:41:04.065684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.582 [2024-07-25 09:41:04.065702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.582 [2024-07-25 09:41:04.065940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.582 [2024-07-25 09:41:04.066183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.582 [2024-07-25 09:41:04.066206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.582 [2024-07-25 09:41:04.066221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.582 [2024-07-25 09:41:04.069815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.582 [2024-07-25 09:41:04.079144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.582 [2024-07-25 09:41:04.079518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.582 [2024-07-25 09:41:04.079550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.582 [2024-07-25 09:41:04.079567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.582 [2024-07-25 09:41:04.079806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.582 [2024-07-25 09:41:04.080049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.582 [2024-07-25 09:41:04.080072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.582 [2024-07-25 09:41:04.080092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.582 [2024-07-25 09:41:04.083694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.582 [2024-07-25 09:41:04.093014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.582 [2024-07-25 09:41:04.093449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.582 [2024-07-25 09:41:04.093482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.582 [2024-07-25 09:41:04.093500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.582 [2024-07-25 09:41:04.093739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.582 [2024-07-25 09:41:04.093982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.582 [2024-07-25 09:41:04.094005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.582 [2024-07-25 09:41:04.094021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.582 [2024-07-25 09:41:04.097617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.582 [2024-07-25 09:41:04.106947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.582 [2024-07-25 09:41:04.107413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.582 [2024-07-25 09:41:04.107444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.582 [2024-07-25 09:41:04.107462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.582 [2024-07-25 09:41:04.107701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.582 [2024-07-25 09:41:04.107944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.582 [2024-07-25 09:41:04.107968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.582 [2024-07-25 09:41:04.107983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.582 [2024-07-25 09:41:04.111575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.582 [2024-07-25 09:41:04.120884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.582 [2024-07-25 09:41:04.121353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.582 [2024-07-25 09:41:04.121393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.582 [2024-07-25 09:41:04.121411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.582 [2024-07-25 09:41:04.121650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.582 [2024-07-25 09:41:04.121893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.582 [2024-07-25 09:41:04.121916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.582 [2024-07-25 09:41:04.121931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.582 [2024-07-25 09:41:04.125524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.583 [2024-07-25 09:41:04.134828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.135299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.135352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.135384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.135623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.135866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.135889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.135904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.139490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.583 [2024-07-25 09:41:04.148698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.149166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.149196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.149213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.149470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.149698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.149719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.149733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.153062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.583 [2024-07-25 09:41:04.162230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.162698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.162730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.162749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.162984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.163190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.163210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.163223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.166208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.583 [2024-07-25 09:41:04.175522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.175933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.175957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.175971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.176181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.176414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.176439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.176453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.179442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.583 [2024-07-25 09:41:04.188806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.189252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.189277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.189306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.189532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.189751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.189770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.189783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.192771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.583 [2024-07-25 09:41:04.202042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.202447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.202473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.202487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.202717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.202916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.202935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.202947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.205974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.583 [2024-07-25 09:41:04.215248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.215701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.215740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.215754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.215964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.216163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.216182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.216194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.219188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.583 [2024-07-25 09:41:04.228481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.228936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.228975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.228990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.229185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.229411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.229432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.229445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.232428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.583 [2024-07-25 09:41:04.241723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.242096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.242134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.242148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.242385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.242591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.242611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.242623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.245621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.583 [2024-07-25 09:41:04.254918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.583 [2024-07-25 09:41:04.255348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.583 [2024-07-25 09:41:04.255393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.583 [2024-07-25 09:41:04.255409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.583 [2024-07-25 09:41:04.255624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.583 [2024-07-25 09:41:04.255840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.583 [2024-07-25 09:41:04.255859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.583 [2024-07-25 09:41:04.255871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.583 [2024-07-25 09:41:04.258862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.583 [2024-07-25 09:41:04.268221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.584 [2024-07-25 09:41:04.268717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.584 [2024-07-25 09:41:04.268742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.584 [2024-07-25 09:41:04.268760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.584 [2024-07-25 09:41:04.268970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.584 [2024-07-25 09:41:04.269169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.584 [2024-07-25 09:41:04.269188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.584 [2024-07-25 09:41:04.269200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.584 [2024-07-25 09:41:04.272222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.584 [2024-07-25 09:41:04.281540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.584 [2024-07-25 09:41:04.282017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.584 [2024-07-25 09:41:04.282056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.584 [2024-07-25 09:41:04.282071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.584 [2024-07-25 09:41:04.282265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.584 [2024-07-25 09:41:04.282495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.584 [2024-07-25 09:41:04.282516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.584 [2024-07-25 09:41:04.282529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.584 [2024-07-25 09:41:04.285512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.584 [2024-07-25 09:41:04.294839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.584 [2024-07-25 09:41:04.295288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.584 [2024-07-25 09:41:04.295327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.584 [2024-07-25 09:41:04.295342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.584 [2024-07-25 09:41:04.295574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.584 [2024-07-25 09:41:04.295793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.584 [2024-07-25 09:41:04.295812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.584 [2024-07-25 09:41:04.295825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.584 [2024-07-25 09:41:04.298815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.584 [2024-07-25 09:41:04.308128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.584 [2024-07-25 09:41:04.308506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.584 [2024-07-25 09:41:04.308547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.584 [2024-07-25 09:41:04.308561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.584 [2024-07-25 09:41:04.308788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.584 [2024-07-25 09:41:04.308987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.584 [2024-07-25 09:41:04.309012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.584 [2024-07-25 09:41:04.309025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.584 [2024-07-25 09:41:04.312252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.843 [2024-07-25 09:41:04.321802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.322257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.322298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.322314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.322551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.843 [2024-07-25 09:41:04.322801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.843 [2024-07-25 09:41:04.322822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.843 [2024-07-25 09:41:04.322836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.843 [2024-07-25 09:41:04.326189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.843 [2024-07-25 09:41:04.335164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.335567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.335592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.335606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.335834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.843 [2024-07-25 09:41:04.336033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.843 [2024-07-25 09:41:04.336052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.843 [2024-07-25 09:41:04.336064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.843 [2024-07-25 09:41:04.339052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.843 [2024-07-25 09:41:04.348417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.348888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.348929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.348944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.349140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.843 [2024-07-25 09:41:04.349353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.843 [2024-07-25 09:41:04.349381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.843 [2024-07-25 09:41:04.349395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.843 [2024-07-25 09:41:04.352382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.843 [2024-07-25 09:41:04.361773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.362201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.362244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.362262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.362522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.843 [2024-07-25 09:41:04.362788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.843 [2024-07-25 09:41:04.362811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.843 [2024-07-25 09:41:04.362827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.843 [2024-07-25 09:41:04.365876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.843 [2024-07-25 09:41:04.375046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.375471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.375497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.375527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.375743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.843 [2024-07-25 09:41:04.375943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.843 [2024-07-25 09:41:04.375962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.843 [2024-07-25 09:41:04.375974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.843 [2024-07-25 09:41:04.378959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.843 [2024-07-25 09:41:04.388392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.388811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.388851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.388866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.389082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.843 [2024-07-25 09:41:04.389299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.843 [2024-07-25 09:41:04.389318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.843 [2024-07-25 09:41:04.389330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.843 [2024-07-25 09:41:04.392321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.843 [2024-07-25 09:41:04.401689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.843 [2024-07-25 09:41:04.402102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.843 [2024-07-25 09:41:04.402129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.843 [2024-07-25 09:41:04.402143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.843 [2024-07-25 09:41:04.402384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.402590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.402610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.402623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.405605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.844 [2024-07-25 09:41:04.414909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.415340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.415385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.415409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.415624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.415840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.415859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.415872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.418860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.844 [2024-07-25 09:41:04.428137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.428608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.428647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.428663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.428875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.429074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.429093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.429105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.432093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.844 [2024-07-25 09:41:04.441391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.441821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.441846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.441874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.442070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.442268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.442287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.442311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.445317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.844 [2024-07-25 09:41:04.454721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.455100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.455140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.455154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.455390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.455596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.455616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.455629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.458619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.844 [2024-07-25 09:41:04.467910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.468289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.468328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.468341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.468579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.468798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.468818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.468830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.471854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.844 [2024-07-25 09:41:04.481153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.481634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.481661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.481676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.481877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.482093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.482113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.482125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.485221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.844 [2024-07-25 09:41:04.494488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.494930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.494974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.494989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.495185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.495411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.495432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.495445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.498425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.844 [2024-07-25 09:41:04.507765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.508177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.508202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.508216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.508455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.508676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.508696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.508708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.844 [2024-07-25 09:41:04.511691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.844 [2024-07-25 09:41:04.520965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.844 [2024-07-25 09:41:04.521411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.844 [2024-07-25 09:41:04.521451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.844 [2024-07-25 09:41:04.521465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.844 [2024-07-25 09:41:04.521681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.844 [2024-07-25 09:41:04.521896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.844 [2024-07-25 09:41:04.521916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.844 [2024-07-25 09:41:04.521928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.845 [2024-07-25 09:41:04.524914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.845 [2024-07-25 09:41:04.534188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.845 [2024-07-25 09:41:04.534634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.845 [2024-07-25 09:41:04.534674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.845 [2024-07-25 09:41:04.534688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.845 [2024-07-25 09:41:04.534903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.845 [2024-07-25 09:41:04.535102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.845 [2024-07-25 09:41:04.535122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.845 [2024-07-25 09:41:04.535134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.845 [2024-07-25 09:41:04.538127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.845 [2024-07-25 09:41:04.547427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.845 [2024-07-25 09:41:04.547884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.845 [2024-07-25 09:41:04.547909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.845 [2024-07-25 09:41:04.547938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.845 [2024-07-25 09:41:04.548134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.845 [2024-07-25 09:41:04.548332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.845 [2024-07-25 09:41:04.548352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.845 [2024-07-25 09:41:04.548388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.845 [2024-07-25 09:41:04.551379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.845 [2024-07-25 09:41:04.560726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.845 [2024-07-25 09:41:04.561175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.845 [2024-07-25 09:41:04.561214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.845 [2024-07-25 09:41:04.561230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.845 [2024-07-25 09:41:04.561455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.845 [2024-07-25 09:41:04.561675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.845 [2024-07-25 09:41:04.561694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.845 [2024-07-25 09:41:04.561707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.845 [2024-07-25 09:41:04.564862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.845 [2024-07-25 09:41:04.574396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.845 [2024-07-25 09:41:04.574853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.845 [2024-07-25 09:41:04.574880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:31.845 [2024-07-25 09:41:04.574895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:31.845 [2024-07-25 09:41:04.575104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:31.845 [2024-07-25 09:41:04.575315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.845 [2024-07-25 09:41:04.575351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.845 [2024-07-25 09:41:04.575382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.104 [2024-07-25 09:41:04.578758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.104 [2024-07-25 09:41:04.587806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.104 [2024-07-25 09:41:04.588229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.104 [2024-07-25 09:41:04.588268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.104 [2024-07-25 09:41:04.588283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.104 [2024-07-25 09:41:04.588525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.104 [2024-07-25 09:41:04.588764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.104 [2024-07-25 09:41:04.588784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.104 [2024-07-25 09:41:04.588797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.104 [2024-07-25 09:41:04.591878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.104 [2024-07-25 09:41:04.601235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.104 [2024-07-25 09:41:04.601708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.104 [2024-07-25 09:41:04.601733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.104 [2024-07-25 09:41:04.601747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.104 [2024-07-25 09:41:04.601957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.104 [2024-07-25 09:41:04.602157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.104 [2024-07-25 09:41:04.602176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.104 [2024-07-25 09:41:04.602189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.104 [2024-07-25 09:41:04.605179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.104 [2024-07-25 09:41:04.614521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.104 [2024-07-25 09:41:04.614947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.104 [2024-07-25 09:41:04.614978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.104 [2024-07-25 09:41:04.615007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.104 [2024-07-25 09:41:04.615203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.104 [2024-07-25 09:41:04.615430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.104 [2024-07-25 09:41:04.615450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.104 [2024-07-25 09:41:04.615463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.104 [2024-07-25 09:41:04.618451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.104 [2024-07-25 09:41:04.627806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.104 [2024-07-25 09:41:04.628239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.104 [2024-07-25 09:41:04.628282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.104 [2024-07-25 09:41:04.628297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.104 [2024-07-25 09:41:04.628524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.104 [2024-07-25 09:41:04.628743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.104 [2024-07-25 09:41:04.628762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.104 [2024-07-25 09:41:04.628775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.104 [2024-07-25 09:41:04.631758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.104 [2024-07-25 09:41:04.641027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.104 [2024-07-25 09:41:04.641450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.104 [2024-07-25 09:41:04.641476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.104 [2024-07-25 09:41:04.641506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.104 [2024-07-25 09:41:04.641720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.104 [2024-07-25 09:41:04.641920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.104 [2024-07-25 09:41:04.641939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.104 [2024-07-25 09:41:04.641951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.104 [2024-07-25 09:41:04.644951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.104 [2024-07-25 09:41:04.654224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.104 [2024-07-25 09:41:04.654655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.104 [2024-07-25 09:41:04.654679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.104 [2024-07-25 09:41:04.654693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.104 [2024-07-25 09:41:04.654903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.104 [2024-07-25 09:41:04.655102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.655121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.655134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.658158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.105 [2024-07-25 09:41:04.667461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.667878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.667903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.667931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.668127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.668331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.668376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.668390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.671414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.105 [2024-07-25 09:41:04.680735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.681211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.681236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.681250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.681490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.681724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.681744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.681757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.684764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.105 [2024-07-25 09:41:04.694078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.694490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.694515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.694544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.694757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.694956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.694976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.694988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.697947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.105 [2024-07-25 09:41:04.707455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.707880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.707904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.707933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.708128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.708327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.708370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.708385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.711375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.105 [2024-07-25 09:41:04.720669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.721090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.721114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.721143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.721354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.721570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.721590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.721602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.724588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.105 [2024-07-25 09:41:04.733876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.734303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.734328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.734365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.734584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.734801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.734821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.734833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.737817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.105 [2024-07-25 09:41:04.747106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.747487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.747527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.747541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.747769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.747968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.747988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.748001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.750989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.105 [2024-07-25 09:41:04.760413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.760869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.760907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.760927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.761123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.761322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.761362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.761378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.764534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.105 [2024-07-25 09:41:04.773708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.774118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.774142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.774156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.774400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.105 [2024-07-25 09:41:04.774606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.105 [2024-07-25 09:41:04.774626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.105 [2024-07-25 09:41:04.774639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.105 [2024-07-25 09:41:04.777624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.105 [2024-07-25 09:41:04.786957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.105 [2024-07-25 09:41:04.787418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.105 [2024-07-25 09:41:04.787444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.105 [2024-07-25 09:41:04.787459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.105 [2024-07-25 09:41:04.787676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.106 [2024-07-25 09:41:04.787892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.106 [2024-07-25 09:41:04.787911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.106 [2024-07-25 09:41:04.787923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.106 [2024-07-25 09:41:04.790952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.106 [2024-07-25 09:41:04.800302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.106 [2024-07-25 09:41:04.800748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.106 [2024-07-25 09:41:04.800788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.106 [2024-07-25 09:41:04.800803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.106 [2024-07-25 09:41:04.801012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.106 [2024-07-25 09:41:04.801212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.106 [2024-07-25 09:41:04.801236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.106 [2024-07-25 09:41:04.801249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.106 [2024-07-25 09:41:04.804240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.106 [2024-07-25 09:41:04.813595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.106 [2024-07-25 09:41:04.813946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.106 [2024-07-25 09:41:04.813972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.106 [2024-07-25 09:41:04.813986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.106 [2024-07-25 09:41:04.814182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.106 [2024-07-25 09:41:04.814407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.106 [2024-07-25 09:41:04.814428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.106 [2024-07-25 09:41:04.814441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.106 [2024-07-25 09:41:04.817429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.106 [2024-07-25 09:41:04.826980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.106 [2024-07-25 09:41:04.827367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.106 [2024-07-25 09:41:04.827395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.106 [2024-07-25 09:41:04.827412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.106 [2024-07-25 09:41:04.827627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.106 [2024-07-25 09:41:04.827845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.106 [2024-07-25 09:41:04.827867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.106 [2024-07-25 09:41:04.827880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.106 [2024-07-25 09:41:04.831329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.365 [2024-07-25 09:41:04.840733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.365 [2024-07-25 09:41:04.841117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-07-25 09:41:04.841145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.365 [2024-07-25 09:41:04.841160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.365 [2024-07-25 09:41:04.841385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.365 [2024-07-25 09:41:04.841604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.365 [2024-07-25 09:41:04.841626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.365 [2024-07-25 09:41:04.841640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.365 [2024-07-25 09:41:04.844964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.365 [2024-07-25 09:41:04.853979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.365 [2024-07-25 09:41:04.854351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-07-25 09:41:04.854383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.365 [2024-07-25 09:41:04.854399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.365 [2024-07-25 09:41:04.854600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.365 [2024-07-25 09:41:04.854816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.365 [2024-07-25 09:41:04.854836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.365 [2024-07-25 09:41:04.854848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.365 [2024-07-25 09:41:04.857873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.365 [2024-07-25 09:41:04.867216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.365 [2024-07-25 09:41:04.867554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-07-25 09:41:04.867580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.365 [2024-07-25 09:41:04.867595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.365 [2024-07-25 09:41:04.867808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.365 [2024-07-25 09:41:04.868007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.365 [2024-07-25 09:41:04.868027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.365 [2024-07-25 09:41:04.868039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.365 [2024-07-25 09:41:04.871069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.365 [2024-07-25 09:41:04.880553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.365 [2024-07-25 09:41:04.880944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-07-25 09:41:04.880983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.365 [2024-07-25 09:41:04.880997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.365 [2024-07-25 09:41:04.881207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.365 [2024-07-25 09:41:04.881434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.881455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.881468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.884520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.366 [2024-07-25 09:41:04.893888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.894229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.894254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.894269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.894496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.894716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.894736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.894749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.897741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.366 [2024-07-25 09:41:04.907241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.907581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.907608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.907622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.907834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.908033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.908052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.908065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.911056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.366 [2024-07-25 09:41:04.920580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.920955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.920981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.920995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.921191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.921417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.921438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.921451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.924480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.366 [2024-07-25 09:41:04.933953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.934258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.934283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.934298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.934520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.934740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.934759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.934776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.937769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.366 [2024-07-25 09:41:04.947235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.947579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.947605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.947620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.947831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.948030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.948050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.948062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.951046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.366 [2024-07-25 09:41:04.960563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.960993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.961018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.961032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.961242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.961467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.961488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.961501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.964662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.366 [2024-07-25 09:41:04.973816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.974228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.974253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.974267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.974504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.974724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.974743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.974756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.977740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.366 [2024-07-25 09:41:04.987065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:04.987486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:04.987512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:04.987543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:04.987756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:04.987956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:04.987975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:04.987987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:04.991011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.366 [2024-07-25 09:41:05.000329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:05.000785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:05.000810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:05.000838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:05.001034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:05.001233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:05.001252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.366 [2024-07-25 09:41:05.001265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.366 [2024-07-25 09:41:05.004825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.366 [2024-07-25 09:41:05.014333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.366 [2024-07-25 09:41:05.014790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-07-25 09:41:05.014821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.366 [2024-07-25 09:41:05.014838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.366 [2024-07-25 09:41:05.015077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.366 [2024-07-25 09:41:05.015320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.366 [2024-07-25 09:41:05.015343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.367 [2024-07-25 09:41:05.015370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.367 [2024-07-25 09:41:05.018955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.367 [2024-07-25 09:41:05.028256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.367 [2024-07-25 09:41:05.028692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-07-25 09:41:05.028723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.367 [2024-07-25 09:41:05.028743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.367 [2024-07-25 09:41:05.028982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.367 [2024-07-25 09:41:05.029230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.367 [2024-07-25 09:41:05.029254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.367 [2024-07-25 09:41:05.029269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.367 [2024-07-25 09:41:05.032862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.367 [2024-07-25 09:41:05.042159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.367 [2024-07-25 09:41:05.042584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-07-25 09:41:05.042615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.367 [2024-07-25 09:41:05.042633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.367 [2024-07-25 09:41:05.042872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.367 [2024-07-25 09:41:05.043115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.367 [2024-07-25 09:41:05.043139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.367 [2024-07-25 09:41:05.043153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.367 [2024-07-25 09:41:05.046759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.367 [2024-07-25 09:41:05.056057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.367 [2024-07-25 09:41:05.056540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-07-25 09:41:05.056572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.367 [2024-07-25 09:41:05.056589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.367 [2024-07-25 09:41:05.056828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.367 [2024-07-25 09:41:05.057071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.367 [2024-07-25 09:41:05.057095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.367 [2024-07-25 09:41:05.057110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.367 [2024-07-25 09:41:05.060706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.367 [2024-07-25 09:41:05.070003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.367 [2024-07-25 09:41:05.070430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-07-25 09:41:05.070461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.367 [2024-07-25 09:41:05.070479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.367 [2024-07-25 09:41:05.070717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.367 [2024-07-25 09:41:05.070959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.367 [2024-07-25 09:41:05.070983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.367 [2024-07-25 09:41:05.070998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.367 [2024-07-25 09:41:05.074594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.367 [2024-07-25 09:41:05.083888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.367 [2024-07-25 09:41:05.084338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-07-25 09:41:05.084376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.367 [2024-07-25 09:41:05.084396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.367 [2024-07-25 09:41:05.084635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.367 [2024-07-25 09:41:05.084891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.367 [2024-07-25 09:41:05.084912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.367 [2024-07-25 09:41:05.084926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.367 [2024-07-25 09:41:05.088527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.367 [2024-07-25 09:41:05.097828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.626 [2024-07-25 09:41:05.098265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-07-25 09:41:05.098297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.626 [2024-07-25 09:41:05.098315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.626 [2024-07-25 09:41:05.098564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.626 [2024-07-25 09:41:05.098807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.626 [2024-07-25 09:41:05.098830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.626 [2024-07-25 09:41:05.098845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.626 [2024-07-25 09:41:05.102439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.626 [2024-07-25 09:41:05.111749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.626 [2024-07-25 09:41:05.112137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-07-25 09:41:05.112168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.626 [2024-07-25 09:41:05.112186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.626 [2024-07-25 09:41:05.112435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.626 [2024-07-25 09:41:05.112680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.626 [2024-07-25 09:41:05.112703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.626 [2024-07-25 09:41:05.112718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.626 [2024-07-25 09:41:05.116293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.626 [2024-07-25 09:41:05.125796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.626 [2024-07-25 09:41:05.126183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-07-25 09:41:05.126252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.626 [2024-07-25 09:41:05.126271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.126519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.126764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.126787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.126802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.130387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.627 [2024-07-25 09:41:05.139685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.140142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.140173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.140190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.140440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.140688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.140712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.140726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.144305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.627 [2024-07-25 09:41:05.153637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.154058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.154089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.154107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.154345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.154600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.154624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.154639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.158233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.627 [2024-07-25 09:41:05.167505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.167901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.167933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.167950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.168190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.168451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.168475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.168490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.172071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.627 [2024-07-25 09:41:05.181372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.181758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.181789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.181807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.182046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.182289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.182313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.182328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.185914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.627 [2024-07-25 09:41:05.195213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.195610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.195641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.195659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.195898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.196141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.196164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.196180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.199765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.627 [2024-07-25 09:41:05.209060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.209472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.209511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.209529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.209767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.210010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.210034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.210049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.213641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.627 [2024-07-25 09:41:05.222950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.223368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.223399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.223417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.223656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.223899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.223923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.223938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.227542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.627 [2024-07-25 09:41:05.236838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.237295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.237326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.237344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.237591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.237835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.237859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.237873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.241460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.627 [2024-07-25 09:41:05.250768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.251186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.251217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.251235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.627 [2024-07-25 09:41:05.251482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.627 [2024-07-25 09:41:05.251726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.627 [2024-07-25 09:41:05.251750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.627 [2024-07-25 09:41:05.251765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.627 [2024-07-25 09:41:05.255346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.627 [2024-07-25 09:41:05.264662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.627 [2024-07-25 09:41:05.265128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-07-25 09:41:05.265158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.627 [2024-07-25 09:41:05.265181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.265434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.265677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.265701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.265716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.269299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.628 [2024-07-25 09:41:05.278605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.628 [2024-07-25 09:41:05.279054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-07-25 09:41:05.279084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.628 [2024-07-25 09:41:05.279102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.279340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.279596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.279620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.279635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.283213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.628 [2024-07-25 09:41:05.292519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.628 [2024-07-25 09:41:05.292995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-07-25 09:41:05.293026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.628 [2024-07-25 09:41:05.293043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.293281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.293536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.293560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.293575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.297165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.628 [2024-07-25 09:41:05.306468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.628 [2024-07-25 09:41:05.306921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-07-25 09:41:05.306951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.628 [2024-07-25 09:41:05.306968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.307207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.307462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.307492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.307508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.311088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.628 [2024-07-25 09:41:05.320399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.628 [2024-07-25 09:41:05.320769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-07-25 09:41:05.320800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.628 [2024-07-25 09:41:05.320817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.321056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.321299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.321323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.321338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.324931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.628 [2024-07-25 09:41:05.334248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.628 [2024-07-25 09:41:05.334689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-07-25 09:41:05.334720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.628 [2024-07-25 09:41:05.334737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.334975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.335227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.335249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.335262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.338830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.628 [2024-07-25 09:41:05.348134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.628 [2024-07-25 09:41:05.348601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-07-25 09:41:05.348632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.628 [2024-07-25 09:41:05.348650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.628 [2024-07-25 09:41:05.348888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.628 [2024-07-25 09:41:05.349131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.628 [2024-07-25 09:41:05.349154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.628 [2024-07-25 09:41:05.349169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.628 [2024-07-25 09:41:05.352760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.887 [2024-07-25 09:41:05.362383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.887 [2024-07-25 09:41:05.362857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.887 [2024-07-25 09:41:05.362894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.887 [2024-07-25 09:41:05.362916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.887 [2024-07-25 09:41:05.363165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.887 [2024-07-25 09:41:05.363422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.887 [2024-07-25 09:41:05.363447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.887 [2024-07-25 09:41:05.363462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.887 [2024-07-25 09:41:05.367042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.887 [2024-07-25 09:41:05.376339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.887 [2024-07-25 09:41:05.376827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.887 [2024-07-25 09:41:05.376858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.887 [2024-07-25 09:41:05.376875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.887 [2024-07-25 09:41:05.377114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.887 [2024-07-25 09:41:05.377368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.887 [2024-07-25 09:41:05.377392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.887 [2024-07-25 09:41:05.377407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.887 [2024-07-25 09:41:05.380986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.887 [2024-07-25 09:41:05.390281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.887 [2024-07-25 09:41:05.390721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.887 [2024-07-25 09:41:05.390752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.887 [2024-07-25 09:41:05.390770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.887 [2024-07-25 09:41:05.391009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.887 [2024-07-25 09:41:05.391252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.887 [2024-07-25 09:41:05.391275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.887 [2024-07-25 09:41:05.391290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.887 [2024-07-25 09:41:05.394880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.887 [2024-07-25 09:41:05.404197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.887 [2024-07-25 09:41:05.404620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.887 [2024-07-25 09:41:05.404651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.887 [2024-07-25 09:41:05.404668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.887 [2024-07-25 09:41:05.404915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.887 [2024-07-25 09:41:05.405158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.887 [2024-07-25 09:41:05.405182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.887 [2024-07-25 09:41:05.405197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.887 [2024-07-25 09:41:05.408790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.888 [2024-07-25 09:41:05.418100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.418581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.418612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.418630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.418869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.419112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.419135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.419150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.422740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.888 [2024-07-25 09:41:05.432038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.432506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.432537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.432554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.432794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.433036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.433060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.433075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.436667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.888 [2024-07-25 09:41:05.445961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.446445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.446476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.446493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.446732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.446984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.447008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.447030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.450625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.888 [2024-07-25 09:41:05.459927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.460397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.460428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.460446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.460685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.460928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.460951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.460966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.464553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.888 [2024-07-25 09:41:05.473849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.474322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.474352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.474391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.474630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.474873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.474897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.474911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.478497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.888 [2024-07-25 09:41:05.487791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.488242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.488273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.488290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.488541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.488785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.488808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.488823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.492413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.888 [2024-07-25 09:41:05.501711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.502190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.502220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.502238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.502489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.502732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.502756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.502771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.506348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.888 [2024-07-25 09:41:05.515648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.516116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.516147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.516165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.516415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.516659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.516682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.516697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.520276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.888 [2024-07-25 09:41:05.529585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.530017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.530069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.530087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.530325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.530580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.530604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.530619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.534199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.888 [2024-07-25 09:41:05.543508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.888 [2024-07-25 09:41:05.543996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.888 [2024-07-25 09:41:05.544047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.888 [2024-07-25 09:41:05.544065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.888 [2024-07-25 09:41:05.544304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.888 [2024-07-25 09:41:05.544567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.888 [2024-07-25 09:41:05.544591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.888 [2024-07-25 09:41:05.544606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.888 [2024-07-25 09:41:05.548201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.889 [2024-07-25 09:41:05.557504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.889 [2024-07-25 09:41:05.557989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.889 [2024-07-25 09:41:05.558039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.889 [2024-07-25 09:41:05.558057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.889 [2024-07-25 09:41:05.558296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.889 [2024-07-25 09:41:05.558552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.889 [2024-07-25 09:41:05.558577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.889 [2024-07-25 09:41:05.558592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.889 [2024-07-25 09:41:05.562261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.889 [2024-07-25 09:41:05.571492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.889 [2024-07-25 09:41:05.571933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.889 [2024-07-25 09:41:05.571984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.889 [2024-07-25 09:41:05.572001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.889 [2024-07-25 09:41:05.572241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.889 [2024-07-25 09:41:05.572496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.889 [2024-07-25 09:41:05.572521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.889 [2024-07-25 09:41:05.572536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.889 [2024-07-25 09:41:05.576117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.889 [2024-07-25 09:41:05.585421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.889 [2024-07-25 09:41:05.585875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.889 [2024-07-25 09:41:05.585930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.889 [2024-07-25 09:41:05.585948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.889 [2024-07-25 09:41:05.586187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.889 [2024-07-25 09:41:05.586443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.889 [2024-07-25 09:41:05.586467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.889 [2024-07-25 09:41:05.586482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.889 [2024-07-25 09:41:05.590068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.889 [2024-07-25 09:41:05.599385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.889 [2024-07-25 09:41:05.599870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.889 [2024-07-25 09:41:05.599919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.889 [2024-07-25 09:41:05.599937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.889 [2024-07-25 09:41:05.600176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.889 [2024-07-25 09:41:05.600432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.889 [2024-07-25 09:41:05.600456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.889 [2024-07-25 09:41:05.600471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.889 [2024-07-25 09:41:05.604052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.889 [2024-07-25 09:41:05.613350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.889 [2024-07-25 09:41:05.613841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.889 [2024-07-25 09:41:05.613889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:32.889 [2024-07-25 09:41:05.613907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:32.889 [2024-07-25 09:41:05.614146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:32.889 [2024-07-25 09:41:05.614400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.889 [2024-07-25 09:41:05.614424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.889 [2024-07-25 09:41:05.614440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.889 [2024-07-25 09:41:05.618017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.148 [2024-07-25 09:41:05.627350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.148 [2024-07-25 09:41:05.627794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.148 [2024-07-25 09:41:05.627825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.148 [2024-07-25 09:41:05.627843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.148 [2024-07-25 09:41:05.628081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.148 [2024-07-25 09:41:05.628324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.148 [2024-07-25 09:41:05.628347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.148 [2024-07-25 09:41:05.628375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.148 [2024-07-25 09:41:05.631960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.148 [2024-07-25 09:41:05.641261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.148 [2024-07-25 09:41:05.641732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.148 [2024-07-25 09:41:05.641763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.148 [2024-07-25 09:41:05.641787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.148 [2024-07-25 09:41:05.642027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.148 [2024-07-25 09:41:05.642270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.148 [2024-07-25 09:41:05.642294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.148 [2024-07-25 09:41:05.642308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.148 [2024-07-25 09:41:05.645902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.148 [2024-07-25 09:41:05.655216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.148 [2024-07-25 09:41:05.655651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.148 [2024-07-25 09:41:05.655683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.148 [2024-07-25 09:41:05.655700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.148 [2024-07-25 09:41:05.655939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.148 [2024-07-25 09:41:05.656182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.148 [2024-07-25 09:41:05.656206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.148 [2024-07-25 09:41:05.656221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.148 [2024-07-25 09:41:05.659819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.148 [2024-07-25 09:41:05.669114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.669548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.669579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.669596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.669834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.670078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.670101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.670116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.673706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.149 [2024-07-25 09:41:05.683007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.683474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.683505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.683523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.683762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.684010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.684034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.684049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.687642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.149 [2024-07-25 09:41:05.696942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.697397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.697428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.697446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.697685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.697928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.697952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.697967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.701560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.149 [2024-07-25 09:41:05.710858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.711327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.711374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.711394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.711633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.711876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.711899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.711914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.715505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.149 [2024-07-25 09:41:05.724805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.725305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.725336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.725354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.725607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.725850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.725874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.725889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.729476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.149 [2024-07-25 09:41:05.738779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.739220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.739250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.739268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.739518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.739762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.739786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.739801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.743391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.149 [2024-07-25 09:41:05.752714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.753201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.753231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.753249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.753501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.753745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.753768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.753784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.757370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.149 [2024-07-25 09:41:05.766638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.767098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.767129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.767148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.767399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.767643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.767667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.767682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.771263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.149 [2024-07-25 09:41:05.780568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.149 [2024-07-25 09:41:05.780998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.149 [2024-07-25 09:41:05.781028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.149 [2024-07-25 09:41:05.781051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.149 [2024-07-25 09:41:05.781292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.149 [2024-07-25 09:41:05.781547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.149 [2024-07-25 09:41:05.781571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.149 [2024-07-25 09:41:05.781586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.149 [2024-07-25 09:41:05.785168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.150 [2024-07-25 09:41:05.794478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.794934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.794964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.794982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.795221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.795475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.795500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.795515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.150 [2024-07-25 09:41:05.799096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.150 [2024-07-25 09:41:05.808400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.808864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.808895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.808912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.809150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.809406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.809430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.809446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.150 [2024-07-25 09:41:05.813025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.150 [2024-07-25 09:41:05.822324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.822785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.822816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.822834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.823072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.823315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.823345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.823373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.150 [2024-07-25 09:41:05.826958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.150 [2024-07-25 09:41:05.836264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.836687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.836718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.836736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.836974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.837217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.837240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.837255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.150 [2024-07-25 09:41:05.840845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.150 [2024-07-25 09:41:05.850162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.850652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.850705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.850723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.850962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.851205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.851229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.851244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.150 [2024-07-25 09:41:05.854835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.150 [2024-07-25 09:41:05.864137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.864616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.864668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.864686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.864924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.865167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.865190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.865205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.150 [2024-07-25 09:41:05.868794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.150 [2024-07-25 09:41:05.878091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.150 [2024-07-25 09:41:05.878564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.150 [2024-07-25 09:41:05.878595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.150 [2024-07-25 09:41:05.878613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.150 [2024-07-25 09:41:05.878851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.150 [2024-07-25 09:41:05.879094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.150 [2024-07-25 09:41:05.879117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.150 [2024-07-25 09:41:05.879132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.882727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.410 [2024-07-25 09:41:05.892031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.892456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.892487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.892504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.892744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.892986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.893010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.410 [2024-07-25 09:41:05.893025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.896615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.410 [2024-07-25 09:41:05.905913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.906385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.906416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.906433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.906672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.906915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.906938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.410 [2024-07-25 09:41:05.906953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.910545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.410 [2024-07-25 09:41:05.919843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.920319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.920381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.920400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.920645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.920888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.920912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.410 [2024-07-25 09:41:05.920927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.924520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.410 [2024-07-25 09:41:05.933828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.934307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.934367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.934387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.934626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.934868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.934892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.410 [2024-07-25 09:41:05.934907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.938500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.410 [2024-07-25 09:41:05.947822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.948299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.948350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.948381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.948621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.948874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.948898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.410 [2024-07-25 09:41:05.948913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.952504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.410 [2024-07-25 09:41:05.961826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.962242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.962296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.962318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.962617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.962914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.962943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.410 [2024-07-25 09:41:05.962972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.410 [2024-07-25 09:41:05.966569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.410 [2024-07-25 09:41:05.975890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.410 [2024-07-25 09:41:05.976272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.410 [2024-07-25 09:41:05.976303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.410 [2024-07-25 09:41:05.976321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.410 [2024-07-25 09:41:05.976585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.410 [2024-07-25 09:41:05.976830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.410 [2024-07-25 09:41:05.976853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:05.976868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:05.980461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.411 [2024-07-25 09:41:05.989780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:05.990215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:05.990268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:05.990286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:05.990538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:05.990783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:05.990806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:05.990822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:05.994435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.411 [2024-07-25 09:41:06.003763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.004187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.004250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.004267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.004518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.004763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.004786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.004801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.008415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.411 [2024-07-25 09:41:06.017740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.018141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.018178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.018196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.018446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.018690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.018714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.018729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.022314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.411 [2024-07-25 09:41:06.031641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.031988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.032033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.032048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.032257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.032496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.032527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.032542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.035670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.411 [2024-07-25 09:41:06.045061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.045429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.045459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.045475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.045723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.045929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.045950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.045964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.049160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.411 [2024-07-25 09:41:06.058523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.058885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.058924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.058938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.059148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.059385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.059417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.059431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.062507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.411 [2024-07-25 09:41:06.071910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.072243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.072268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.072283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.072531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.072771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.072792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.072804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.075909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.411 [2024-07-25 09:41:06.085308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.085738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.085778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.085793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.086002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.086201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.086220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.086233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.089257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.411 [2024-07-25 09:41:06.098778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.099169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.099208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.099222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.099462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.099699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.099734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.411 [2024-07-25 09:41:06.099747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.411 [2024-07-25 09:41:06.102813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.411 [2024-07-25 09:41:06.111970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.411 [2024-07-25 09:41:06.112383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.411 [2024-07-25 09:41:06.112424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.411 [2024-07-25 09:41:06.112439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.411 [2024-07-25 09:41:06.112668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.411 [2024-07-25 09:41:06.112867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.411 [2024-07-25 09:41:06.112886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.412 [2024-07-25 09:41:06.112899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.412 [2024-07-25 09:41:06.115886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.412 [2024-07-25 09:41:06.125206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.412 [2024-07-25 09:41:06.125654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.412 [2024-07-25 09:41:06.125694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.412 [2024-07-25 09:41:06.125708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.412 [2024-07-25 09:41:06.125918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.412 [2024-07-25 09:41:06.126117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.412 [2024-07-25 09:41:06.126136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.412 [2024-07-25 09:41:06.126148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.412 [2024-07-25 09:41:06.129136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.412 [2024-07-25 09:41:06.138533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.412 [2024-07-25 09:41:06.139006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.412 [2024-07-25 09:41:06.139034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.412 [2024-07-25 09:41:06.139049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.412 [2024-07-25 09:41:06.139264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.412 [2024-07-25 09:41:06.139530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.412 [2024-07-25 09:41:06.139551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.412 [2024-07-25 09:41:06.139564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.671 [2024-07-25 09:41:06.142936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.671 [2024-07-25 09:41:06.151859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.671 [2024-07-25 09:41:06.152236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.671 [2024-07-25 09:41:06.152276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.671 [2024-07-25 09:41:06.152294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.671 [2024-07-25 09:41:06.152539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.671 [2024-07-25 09:41:06.152764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.671 [2024-07-25 09:41:06.152783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.671 [2024-07-25 09:41:06.152795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.671 [2024-07-25 09:41:06.155781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.671 [2024-07-25 09:41:06.165289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.671 [2024-07-25 09:41:06.165727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.671 [2024-07-25 09:41:06.165752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.671 [2024-07-25 09:41:06.165781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.671 [2024-07-25 09:41:06.165977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.671 [2024-07-25 09:41:06.166176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.671 [2024-07-25 09:41:06.166196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.671 [2024-07-25 09:41:06.166208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.671 [2024-07-25 09:41:06.169196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.671 [2024-07-25 09:41:06.178541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.671 [2024-07-25 09:41:06.178999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.671 [2024-07-25 09:41:06.179038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.671 [2024-07-25 09:41:06.179053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.671 [2024-07-25 09:41:06.179248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.671 [2024-07-25 09:41:06.179476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.179497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.179510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.182497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.672 [2024-07-25 09:41:06.191877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.192303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.192341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.192364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.192583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.192800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.192825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.192838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.195824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.672 [2024-07-25 09:41:06.205103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.205530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.205577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.205593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.205804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.206004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.206023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.206035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.209066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.672 [2024-07-25 09:41:06.218350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.218820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.218859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.218875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.219070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.219268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.219288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.219300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.222288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.672 [2024-07-25 09:41:06.231589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.231982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.232021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.232034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.232244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.232473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.232494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.232507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.235494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.672 [2024-07-25 09:41:06.244834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.245200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.245226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.245241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.245464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.245684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.245704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.245716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.248704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.672 [2024-07-25 09:41:06.258162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.258498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.258524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.258540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.258755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.258954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.258974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.258986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.261981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.672 [2024-07-25 09:41:06.271437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.271825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.271864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.271878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.272087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.272286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.272305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.272318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.275304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.672 [2024-07-25 09:41:06.284860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.285226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.285253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.285267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.285502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.285725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.285745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.285757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.288808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.672 [2024-07-25 09:41:06.298239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.298631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.298678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.672 [2024-07-25 09:41:06.298693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.672 [2024-07-25 09:41:06.298888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.672 [2024-07-25 09:41:06.299086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.672 [2024-07-25 09:41:06.299106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.672 [2024-07-25 09:41:06.299118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.672 [2024-07-25 09:41:06.302127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
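The block above repeats the same failure sequence roughly every 13 ms: bdev_nvme disconnects the controller, connect() inside posix_sock_create() returns errno 111, the qpair cannot be flushed ("Bad file descriptor"), reconnect_poll_async gives up, and the attempt ends with "Resetting controller failed". Errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 while the target is down; a quick way to confirm the mapping on the build host (assumes Python 3 is available):

    # decode errno 111; prints: ECONNREFUSED - Connection refused
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'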
00:26:33.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 625512 Killed "${NVMF_APP[@]}" "$@" 00:26:33.672 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:33.672 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:33.672 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.672 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:33.672 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:33.672 [2024-07-25 09:41:06.311671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.672 [2024-07-25 09:41:06.312047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.672 [2024-07-25 09:41:06.312089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.312104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.312340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.312583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.312605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.312619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.315747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=626470 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 626470 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 626470 ']' 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
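At this point bdevperf.sh has deliberately killed the first nvmf_tgt (pid 625512), and tgt_init/nvmfappstart relaunch it inside the cvl_0_0_ns_spdk network namespace with core mask 0xE, then wait for its RPC socket. A minimal sketch of that relaunch using the command visible in the trace; the rpc.py polling loop is only an approximation of waitforlisten, and spdk_get_version is just an assumed cheap RPC to probe with:

    # restart the target in the test namespace (paths as used in this workspace)
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # poll the default RPC socket until the target answers, roughly what waitforlisten does
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done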
00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.673 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:33.673 [2024-07-25 09:41:06.325181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.673 [2024-07-25 09:41:06.325550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.673 [2024-07-25 09:41:06.325577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.325593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.325831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.326038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.326058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.326071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.329281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.673 [2024-07-25 09:41:06.338909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.673 [2024-07-25 09:41:06.339270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.673 [2024-07-25 09:41:06.339298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.339314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.339553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.339787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.339809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.339822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.343174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.673 [2024-07-25 09:41:06.352491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.673 [2024-07-25 09:41:06.352867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.673 [2024-07-25 09:41:06.352906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.352921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.353137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.353365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.353393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.353408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.356573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.673 [2024-07-25 09:41:06.363116] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:33.673 [2024-07-25 09:41:06.363189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.673 [2024-07-25 09:41:06.366119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.673 [2024-07-25 09:41:06.366508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.673 [2024-07-25 09:41:06.366536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.366553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.366774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.366991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.367012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.367025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.370187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.673 [2024-07-25 09:41:06.379714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.673 [2024-07-25 09:41:06.380161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.673 [2024-07-25 09:41:06.380203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.380219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.380452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.380680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.380700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.380714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.383802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.673 [2024-07-25 09:41:06.393064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.673 [2024-07-25 09:41:06.393523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.673 [2024-07-25 09:41:06.393552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.673 [2024-07-25 09:41:06.393583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.673 [2024-07-25 09:41:06.393825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.673 [2024-07-25 09:41:06.394033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.673 [2024-07-25 09:41:06.394053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.673 [2024-07-25 09:41:06.394073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.673 [2024-07-25 09:41:06.397157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.673 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.933 [2024-07-25 09:41:06.406644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.407120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.407146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.407176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.407410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.407631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.407653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.407682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.410979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.933 [2024-07-25 09:41:06.420142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.420610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.420656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.420673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.420898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.421110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.421131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.421144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.424384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
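The EAL notice at the top of this block only says node 1 had no free 2048 kB hugepages to report; the target still comes up because its memory is taken from node 0. If that ever needs checking on the build host, the per-node counters live in sysfs (a sketch; the path assumes a stock Linux kernel with 2 MiB hugepages):

    # reserved vs. free 2 MiB hugepages per NUMA node
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages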
00:26:33.933 [2024-07-25 09:41:06.430924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.933 [2024-07-25 09:41:06.433775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.434206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.434257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.434274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.434524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.434758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.434779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.434792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.437970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.933 [2024-07-25 09:41:06.447364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.447946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.447995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.448014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.448230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.448475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.448515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.448533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.451761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
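"Total cores available: 3" follows directly from the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, so cores 1, 2 and 3 are claimed and core 0 is left free, which matches the three reactors that start further down. A one-liner to expand such a mask, for reference:

    # expand a core mask into the cores it selects; prints: mask 0xE -> cores: 1 2 3
    mask=0xE; printf 'mask %s -> cores:' "$mask"
    for c in {0..63}; do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done; echo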
00:26:33.933 [2024-07-25 09:41:06.460913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.461372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.461416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.461432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.461669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.461898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.461919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.461933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.465110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.933 [2024-07-25 09:41:06.474490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.474972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.475013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.475030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.475239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.475483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.475505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.475520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.478710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.933 [2024-07-25 09:41:06.488030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.488450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.488485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.488534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.488764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.488977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.488997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.489010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.492191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.933 [2024-07-25 09:41:06.501568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.502088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.502149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.933 [2024-07-25 09:41:06.502176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.933 [2024-07-25 09:41:06.502428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.933 [2024-07-25 09:41:06.502665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.933 [2024-07-25 09:41:06.502687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.933 [2024-07-25 09:41:06.502703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.933 [2024-07-25 09:41:06.505877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.933 [2024-07-25 09:41:06.515016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.933 [2024-07-25 09:41:06.515442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.933 [2024-07-25 09:41:06.515492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.515509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.515746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.515959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.515980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.515995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.519166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.934 [2024-07-25 09:41:06.528510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.528971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.529013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.529029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.529238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.529479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.529501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.529524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.532719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.934 [2024-07-25 09:41:06.542035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.542481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.542523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.542540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.542769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.542981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.543002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.543015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.546188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.934 [2024-07-25 09:41:06.549616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.934 [2024-07-25 09:41:06.549651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.934 [2024-07-25 09:41:06.549681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.934 [2024-07-25 09:41:06.549694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.934 [2024-07-25 09:41:06.549705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.934 [2024-07-25 09:41:06.549761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.934 [2024-07-25 09:41:06.549813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.934 [2024-07-25 09:41:06.549816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.934 [2024-07-25 09:41:06.555701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.556144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.556175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.556193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.556425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.556647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.556668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.556684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
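The app_setup_trace notices above give two ways to pull the tracepoints enabled by -e 0xFFFF: decode the shared-memory buffer live with the spdk_trace tool, or copy the file for offline analysis. As a sketch (the build/bin location of spdk_trace is assumed from this workspace layout):

    # live snapshot of the nvmf app's trace buffer, as the notice suggests
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # or keep the raw shared-memory file for later decoding
    cp /dev/shm/nvmf_trace.0 /tmp/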
00:26:33.934 [2024-07-25 09:41:06.559918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.934 [2024-07-25 09:41:06.569272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.569829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.569880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.569907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.570141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.570372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.570394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.570410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.573688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.934 [2024-07-25 09:41:06.582971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.583522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.583562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.583582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.583808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.584031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.584053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.584069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.587338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.934 [2024-07-25 09:41:06.596616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.597180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.597218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.597237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.597473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.597696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.597719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.597735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.600958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.934 [2024-07-25 09:41:06.610233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.610751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.610808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.610827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.611061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.611283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.611305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.611328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.614593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.934 [2024-07-25 09:41:06.623852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.624347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.624404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.624424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.624650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.624873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.624895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.934 [2024-07-25 09:41:06.624911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.934 [2024-07-25 09:41:06.628184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.934 [2024-07-25 09:41:06.637437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.934 [2024-07-25 09:41:06.637943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.934 [2024-07-25 09:41:06.637991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.934 [2024-07-25 09:41:06.638011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.934 [2024-07-25 09:41:06.638240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.934 [2024-07-25 09:41:06.638471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.934 [2024-07-25 09:41:06.638494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.935 [2024-07-25 09:41:06.638510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.935 [2024-07-25 09:41:06.641777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.935 [2024-07-25 09:41:06.651021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.935 [2024-07-25 09:41:06.651497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.935 [2024-07-25 09:41:06.651525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.935 [2024-07-25 09:41:06.651542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.935 [2024-07-25 09:41:06.651759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.935 [2024-07-25 09:41:06.651978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.935 [2024-07-25 09:41:06.651999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.935 [2024-07-25 09:41:06.652014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.935 [2024-07-25 09:41:06.655250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.935 [2024-07-25 09:41:06.664613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.935 [2024-07-25 09:41:06.665014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.935 [2024-07-25 09:41:06.665043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:33.935 [2024-07-25 09:41:06.665059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:33.935 [2024-07-25 09:41:06.665280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:33.935 [2024-07-25 09:41:06.665507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.935 [2024-07-25 09:41:06.665529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.935 [2024-07-25 09:41:06.665542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.193 [2024-07-25 09:41:06.668764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.193 [2024-07-25 09:41:06.678151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.193 [2024-07-25 09:41:06.678562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.193 [2024-07-25 09:41:06.678590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.193 [2024-07-25 09:41:06.678606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.193 [2024-07-25 09:41:06.678821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.193 [2024-07-25 09:41:06.679040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.193 [2024-07-25 09:41:06.679062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.193 [2024-07-25 09:41:06.679076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.193 [2024-07-25 09:41:06.682309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.193 [2024-07-25 09:41:06.691762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.193 [2024-07-25 09:41:06.692128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.193 [2024-07-25 09:41:06.692156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.193 [2024-07-25 09:41:06.692172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.193 [2024-07-25 09:41:06.692395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.193 [2024-07-25 09:41:06.692627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.193 [2024-07-25 09:41:06.692654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.193 [2024-07-25 09:41:06.692668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.193 [2024-07-25 09:41:06.695941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.193 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.193 [2024-07-25 09:41:06.704174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.193 [2024-07-25 09:41:06.705398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.193 [2024-07-25 09:41:06.705836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.193 [2024-07-25 09:41:06.705878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.193 [2024-07-25 09:41:06.705895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.194 [2024-07-25 09:41:06.706110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.194 [2024-07-25 09:41:06.706329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.194 [2024-07-25 09:41:06.706350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.194 [2024-07-25 09:41:06.706374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.194 [2024-07-25 09:41:06.709604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.194 [2024-07-25 09:41:06.718934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.194 [2024-07-25 09:41:06.719326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.194 [2024-07-25 09:41:06.719354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.194 [2024-07-25 09:41:06.719379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.194 [2024-07-25 09:41:06.719594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.194 [2024-07-25 09:41:06.719814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.194 [2024-07-25 09:41:06.719836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.194 [2024-07-25 09:41:06.719849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.194 [2024-07-25 09:41:06.723091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.194 [2024-07-25 09:41:06.732549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.194 [2024-07-25 09:41:06.733065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.194 [2024-07-25 09:41:06.733101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.194 [2024-07-25 09:41:06.733144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.194 [2024-07-25 09:41:06.733377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.194 [2024-07-25 09:41:06.733611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.194 [2024-07-25 09:41:06.733640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.194 [2024-07-25 09:41:06.733656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.194 [2024-07-25 09:41:06.736930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.194 Malloc0 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.194 [2024-07-25 09:41:06.746185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.194 [2024-07-25 09:41:06.746704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.194 [2024-07-25 09:41:06.746735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.194 [2024-07-25 09:41:06.746755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.194 [2024-07-25 09:41:06.746977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.194 [2024-07-25 09:41:06.747199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.194 [2024-07-25 09:41:06.747221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.194 [2024-07-25 09:41:06.747237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:34.194 [2024-07-25 09:41:06.750495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.194 [2024-07-25 09:41:06.759728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.194 [2024-07-25 09:41:06.760186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.194 [2024-07-25 09:41:06.760228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030ac0 with addr=10.0.0.2, port=4420 00:26:34.194 [2024-07-25 09:41:06.760248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030ac0 is same with the state(5) to be set 00:26:34.194 [2024-07-25 09:41:06.760475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030ac0 (9): Bad file descriptor 00:26:34.194 [2024-07-25 09:41:06.760695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.194 [2024-07-25 09:41:06.760717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.194 [2024-07-25 09:41:06.760731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.194 [2024-07-25 09:41:06.764129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.194 [2024-07-25 09:41:06.764768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.194 09:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 625746 00:26:34.194 [2024-07-25 09:41:06.773243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.452 [2024-07-25 09:41:06.933930] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
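Interleaved with the reset noise, host/bdevperf.sh lines 17 through 21 have now issued the full target bring-up. Collected in one place, the sequence is the five rpc_cmd calls below; rpc_cmd is the test suite's RPC wrapper and is assumed here to forward these arguments unchanged to the running nvmf_tgt.

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                        # backing bdev
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # subsystem
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # namespace
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listener

Once the last call registers the listener (the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above), the pending controller reset finally succeeds and bdevperf proceeds.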
00:26:44.417 00:26:44.417 Latency(us) 00:26:44.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:44.417 Verification LBA range: start 0x0 length 0x4000 00:26:44.417 Nvme1n1 : 15.00 6917.80 27.02 9464.65 0.00 7789.75 594.68 17864.63 00:26:44.417 =================================================================================================================== 00:26:44.417 Total : 6917.80 27.02 9464.65 0.00 7789.75 594.68 17864.63 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.417 rmmod nvme_tcp 00:26:44.417 rmmod nvme_fabrics 00:26:44.417 rmmod nvme_keyring 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 626470 ']' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 626470 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 626470 ']' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 626470 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 626470 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 626470' 00:26:44.417 killing process with pid 626470 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 626470 00:26:44.417 09:41:16 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 626470 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.417 09:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:45.791 00:26:45.791 real 0m22.508s 00:26:45.791 user 1m0.273s 00:26:45.791 sys 0m4.475s 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:45.791 ************************************ 00:26:45.791 END TEST nvmf_bdevperf 00:26:45.791 ************************************ 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.791 ************************************ 00:26:45.791 START TEST nvmf_target_disconnect 00:26:45.791 ************************************ 00:26:45.791 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:46.049 * Looking for test storage... 
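One sanity check on the Latency(us) summary printed above for Nvme1n1: at the 4096-byte I/O size the job used, the IOPS and MiB/s columns should agree, and they do (bc assumed available on the host).

  # 6917.80 IOPS x 4096 bytes per I/O, expressed in MiB/s; matches the reported 27.02.
  echo 'scale=2; 6917.80 * 4096 / (1024 * 1024)' | bc
  # -> 27.02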
00:26:46.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.049 
09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.049 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.050 09:41:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.949 
09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:47.949 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:47.949 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:47.949 Found net devices under 0000:82:00.0: cvl_0_0 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:47.949 Found net devices under 0000:82:00.1: cvl_0_1 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:47.949 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:47.950 00:26:47.950 --- 10.0.0.2 ping statistics --- 00:26:47.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.950 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:47.950 00:26:47.950 --- 10.0.0.1 ping statistics --- 00:26:47.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.950 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:47.950 ************************************ 00:26:47.950 START TEST nvmf_target_disconnect_tc1 00:26:47.950 ************************************ 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:47.950 09:41:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:47.950 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:48.208 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.208 [2024-07-25 09:41:20.742748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.208 [2024-07-25 09:41:20.742817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122b1a0 with addr=10.0.0.2, port=4420 00:26:48.208 [2024-07-25 09:41:20.742852] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:48.208 [2024-07-25 09:41:20.742883] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:48.208 [2024-07-25 09:41:20.742897] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:48.208 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:48.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:48.208 Initializing NVMe Controllers 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.208 00:26:48.208 real 0m0.098s 00:26:48.208 user 0m0.040s 00:26:48.208 sys 0m0.057s 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:48.208 ************************************ 00:26:48.208 END TEST nvmf_target_disconnect_tc1 00:26:48.208 ************************************ 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:48.208 09:41:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:48.208 ************************************ 00:26:48.208 START TEST nvmf_target_disconnect_tc2 00:26:48.208 ************************************ 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=629523 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 629523 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 629523 ']' 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.208 09:41:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.208 [2024-07-25 09:41:20.854613] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:48.208 [2024-07-25 09:41:20.854732] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.208 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.208 [2024-07-25 09:41:20.919369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.466 [2024-07-25 09:41:21.033853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:48.466 [2024-07-25 09:41:21.033909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.466 [2024-07-25 09:41:21.033938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.466 [2024-07-25 09:41:21.033951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.466 [2024-07-25 09:41:21.033961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.466 [2024-07-25 09:41:21.034055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:48.466 [2024-07-25 09:41:21.034318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:48.466 [2024-07-25 09:41:21.034382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:48.466 [2024-07-25 09:41:21.034386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.466 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 Malloc0 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 [2024-07-25 09:41:21.209111] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
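The tc2 target above is started with -m 0xF0, and the four "Reactor started on core 4/5/6/7" notices follow directly from that mask. A quick expansion of the bitmask, as a standalone sketch rather than part of the traced script:

  mask=0xF0                      # SPDK -m core mask from the nvmf_tgt command line above
  for core in $(seq 0 7); do
    if (( (mask >> core) & 1 )); then printf '%d ' "$core"; fi
  done; echo
  # -> 4 5 6 7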
00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 [2024-07-25 09:41:21.237376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=629645 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:48.724 09:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:48.724 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.629 09:41:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 629523 00:26:50.629 09:41:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O 
failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Read completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Write completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Write completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.629 Write completed with error (sct=0, sc=8) 00:26:50.629 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 [2024-07-25 09:41:23.261582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 
Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 [2024-07-25 09:41:23.261888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed 
with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 [2024-07-25 09:41:23.262219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error 
(sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Read completed with error (sct=0, sc=8) 00:26:50.630 starting I/O failed 00:26:50.630 Write completed with error (sct=0, sc=8) 00:26:50.631 starting I/O failed 00:26:50.631 Write completed with error (sct=0, sc=8) 00:26:50.631 starting I/O failed 00:26:50.631 Write completed with error (sct=0, sc=8) 00:26:50.631 starting I/O failed 00:26:50.631 [2024-07-25 09:41:23.262585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.631 [2024-07-25 09:41:23.262756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.262801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.262946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.262976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.263157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.263212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.263375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.263420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.263552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.263577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.263706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.263745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.263928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.263955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.264117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.264144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 
00:26:50.631 [2024-07-25 09:41:23.264319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.264347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.264507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.264532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.264680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.264718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.264913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.264936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.265141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.265188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.265365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.265409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.265515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.265540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.265735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.265762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.265893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.265920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.266164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.266219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 
00:26:50.631 [2024-07-25 09:41:23.266369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.266425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.266560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.266585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.266759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.266796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.266960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.267011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.267173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.267201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.267416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.267452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.267575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.267600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.267776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.267803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.267903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.267926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.268113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.268141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 
00:26:50.631 [2024-07-25 09:41:23.268249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.268276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.268469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.268509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.268722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.268769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.268978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.269025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.269152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.269198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.269379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.269423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.269570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.269594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.269757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.631 [2024-07-25 09:41:23.269803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.631 qpair failed and we were unable to recover it. 00:26:50.631 [2024-07-25 09:41:23.270053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.270097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.270285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.270312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 
00:26:50.632 [2024-07-25 09:41:23.270466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.270506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.270712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.270748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.270943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.270986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.271230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.271280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.271436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.271467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.271575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.271600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.271791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.271833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.272016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.272063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.272296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.272319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.272504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.272530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 
00:26:50.632 [2024-07-25 09:41:23.272684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.272708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.272932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.272956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.273215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.273261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.273477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.273503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.273652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.273698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.273826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.273868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.274073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.274096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.274266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.274299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.274461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.274487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.274657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.274683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 
00:26:50.632 [2024-07-25 09:41:23.274841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.274865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.275035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.275059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.275277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.275302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.275448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.275475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.275617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.275642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.275753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.275791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.275982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.276023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.276254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.276278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.276466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.276492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.276712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.276740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 
00:26:50.632 [2024-07-25 09:41:23.276979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.277022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.277255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.277278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.277471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.277515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.277720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.277761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.277957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.277999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.632 qpair failed and we were unable to recover it. 00:26:50.632 [2024-07-25 09:41:23.278157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.632 [2024-07-25 09:41:23.278180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.278360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.278384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.278548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.278573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.278778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.278821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.279025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.279075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 
00:26:50.633 [2024-07-25 09:41:23.279298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.279322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.279571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.279596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.279786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.279828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.279934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.279975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.280166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.280221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.280370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.280408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.280573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.280615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.280769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.280812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.280966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.281008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.281159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.281197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 
00:26:50.633 [2024-07-25 09:41:23.281422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.281447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.281627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.281667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.281891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.281931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.282078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.282101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.282264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.282303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.282518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.282550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.282768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.282811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.282977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.283018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.283202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.283225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.283367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.283406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 
00:26:50.633 [2024-07-25 09:41:23.283625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.283665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.283838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.283878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.284090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.284132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.284266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.284289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.284408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.284438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.284595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.284637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.284806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.284848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.285012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.285052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.285262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.285285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.285475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.285523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 
00:26:50.633 [2024-07-25 09:41:23.285749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.285789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.285986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.286027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.633 [2024-07-25 09:41:23.286171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.633 [2024-07-25 09:41:23.286198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.633 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.286404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.286447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.286610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.286652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.286823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.286863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.286968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.287010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.287179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.287203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.287342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.287394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.287628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.287656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 
00:26:50.634 [2024-07-25 09:41:23.287894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.287936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.288097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.288120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.288378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.288402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.288579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.288607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.288804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.288852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.289057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.289098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.289290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.289312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.289520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.289562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.289775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.289802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.289980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.290021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 
00:26:50.634 [2024-07-25 09:41:23.290194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.290217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.290350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.290389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.290527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.290569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.290741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.290783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.290919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.290953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.291189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.291213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.291425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.291449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.291609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.291631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.291747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.291786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.291949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.291986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 
00:26:50.634 [2024-07-25 09:41:23.292117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.292153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.292315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.292353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.634 [2024-07-25 09:41:23.292538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.634 [2024-07-25 09:41:23.292562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.634 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.292714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.292737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.292874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.292912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.293124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.293161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.293368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.293393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.293615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.293643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.293884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.293925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.294125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.294166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 
00:26:50.635 [2024-07-25 09:41:23.294370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.294393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.294597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.294629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.294816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.294858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.294986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.295014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.295240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.295263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.295472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.295515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.295695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.295736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.296009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.296051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.296209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.296232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.296419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.296458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 
00:26:50.635 [2024-07-25 09:41:23.296661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.296704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.296880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.296921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.297121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.297161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.297334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.297380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.297544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.297590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.297753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.297798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.298014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.298055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.298285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.298309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.298498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.298522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.298682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.298725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 
00:26:50.635 [2024-07-25 09:41:23.298881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.298923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.299075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.299117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.299253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.299291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.299446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.299472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.299684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.299726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.299921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.299963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.300134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.300176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.300398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.300423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.300605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.300647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 00:26:50.635 [2024-07-25 09:41:23.300876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.300917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.635 qpair failed and we were unable to recover it. 
00:26:50.635 [2024-07-25 09:41:23.301111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.635 [2024-07-25 09:41:23.301152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.301382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.301406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.301525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.301549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.301765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.301806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.301984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.302025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.302190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.302212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.302435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.302477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.302697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.302737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.302934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.302975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.303215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.303238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 
00:26:50.636 [2024-07-25 09:41:23.303488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.303517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.303760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.303801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.303935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.303977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.304160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.304183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.304411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.304435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.304587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.304630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.304851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.304892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.305062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.305104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.305233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.305271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.305416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.305445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 
00:26:50.636 [2024-07-25 09:41:23.305666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.305705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.305850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.305891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.305991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.306015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.306157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.306181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.306295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.306322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.306537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.306561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.306717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.306747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.306939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.306962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.307111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.307134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.307238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.307261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 
00:26:50.636 [2024-07-25 09:41:23.307394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.307419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.307662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.307703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.307880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.307922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.308067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.308105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.308231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.308263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.308442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.308488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.308656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.636 [2024-07-25 09:41:23.308697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.636 qpair failed and we were unable to recover it. 00:26:50.636 [2024-07-25 09:41:23.308802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.308844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.309001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.309025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.309233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.309270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 
00:26:50.637 [2024-07-25 09:41:23.309395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.309420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.309571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.309616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.309828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.309868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.310029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.310052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.310276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.310314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.310552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.310593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.310800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.310841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.311066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.311107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.311258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.311281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.311443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.311487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 
00:26:50.637 [2024-07-25 09:41:23.311642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.311683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.311914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.311956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.312174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.312215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.312374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.312413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.312598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.312639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.312799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.312841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.312986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.313014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.313224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.313247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.313447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.313486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.313684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.313725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 
00:26:50.637 [2024-07-25 09:41:23.313945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.313985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.314140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.314163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.314363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.314386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.314618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.314659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.314885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.314930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.315130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.315171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.315333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.315386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.315532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.315557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.315747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.315788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.315894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.315936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 
00:26:50.637 [2024-07-25 09:41:23.316055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.316083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.316317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.316354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.316573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.316614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.316740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.316781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.637 [2024-07-25 09:41:23.316915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.637 [2024-07-25 09:41:23.316943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.637 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.317166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.317206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.317364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.317406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.317595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.317636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.317825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.317868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.318081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.318123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 
00:26:50.638 [2024-07-25 09:41:23.318259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.318282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.318425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.318467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.318634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.318675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.318800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.318828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.319015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.319057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.319257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.319280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.319458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.319500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.319674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.319715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.319904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.319945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.320161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.320183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 
00:26:50.638 [2024-07-25 09:41:23.320332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.320360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.320579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.320622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.320755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.320783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.320954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.320996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.321129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.321167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.321306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.321330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.321558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.321586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.321767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.321809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.321962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.322003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.322219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.322242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 
00:26:50.638 [2024-07-25 09:41:23.322421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.322450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.322677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.322718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.322910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.322949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.323086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.323109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.323235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.323263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.323462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.323505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.323657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.323694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.323869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.323911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.324072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.324095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.324214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.324238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 
00:26:50.638 [2024-07-25 09:41:23.324408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.324436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.324600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.324628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.638 qpair failed and we were unable to recover it. 00:26:50.638 [2024-07-25 09:41:23.324820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.638 [2024-07-25 09:41:23.324862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.325053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.639 [2024-07-25 09:41:23.325076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.325213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.639 [2024-07-25 09:41:23.325251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.325414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.639 [2024-07-25 09:41:23.325438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.325618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.639 [2024-07-25 09:41:23.325642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.325873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.639 [2024-07-25 09:41:23.325895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.326042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.639 [2024-07-25 09:41:23.326065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.639 qpair failed and we were unable to recover it. 00:26:50.639 [2024-07-25 09:41:23.326216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.326254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 
00:26:50.640 [2024-07-25 09:41:23.326434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.326457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.326654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.326677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.326816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.326857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.327026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.327049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.327247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.327270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.327517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.327558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.327771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.327812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.327994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.328036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.328179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.328201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.328340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.328388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 
00:26:50.640 [2024-07-25 09:41:23.328582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.328627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.328840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.328881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.329029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.329057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.329287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.329309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.329520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.329562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.329697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.329738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.329972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.330013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.330164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.330187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.330353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.330407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 00:26:50.640 [2024-07-25 09:41:23.330594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.640 [2024-07-25 09:41:23.330635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.640 qpair failed and we were unable to recover it. 
00:26:50.640 [2024-07-25 09:41:23.330838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.640 [2024-07-25 09:41:23.330880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:50.640 qpair failed and we were unable to recover it.
[ the same three-line record (connect() failed, errno = 111; sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between the record above and the record below ]
00:26:50.919 [2024-07-25 09:41:23.377005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.919 [2024-07-25 09:41:23.377046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:50.919 qpair failed and we were unable to recover it.
00:26:50.919 [2024-07-25 09:41:23.377168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.377195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.377394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.377435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.377578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.377624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.377780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.377822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.377998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.378039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.378187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.378209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.378377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.378417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.378656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.378697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.378852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.378895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.379051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.379096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 
00:26:50.919 [2024-07-25 09:41:23.379317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.379355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.379585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.379627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.379757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.379799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.379949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.379977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.919 [2024-07-25 09:41:23.380208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.919 [2024-07-25 09:41:23.380232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.919 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.380408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.380447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.380582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.380623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.380855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.380896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.381050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.381092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.381305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.381328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 
00:26:50.920 [2024-07-25 09:41:23.381569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.381610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.381785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.381830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.382012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.382053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.382284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.382307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.382455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.382501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.382646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.382686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.382842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.382882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.383032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.383073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.383188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.383212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.383409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.383450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 
00:26:50.920 [2024-07-25 09:41:23.383684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.383707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.383900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.383923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.384132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.384155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.384320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.384343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.384505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.384547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.384782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.384824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.384953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.384985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.385156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.385195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.385298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.385322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.385504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.385556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 
00:26:50.920 [2024-07-25 09:41:23.385759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.385800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.385962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.386003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.386218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.386240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.386419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.386461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.386636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.386678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.386884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.386924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.387123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.387165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.387323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.387345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.387556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.387598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.387824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.387865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 
00:26:50.920 [2024-07-25 09:41:23.388062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.920 [2024-07-25 09:41:23.388102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.920 qpair failed and we were unable to recover it. 00:26:50.920 [2024-07-25 09:41:23.388309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.388333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.388545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.388587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.388809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.388850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.389031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.389073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.389269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.389292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.389452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.389476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.389699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.389740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.389980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.390021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.390172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.390195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 
00:26:50.921 [2024-07-25 09:41:23.390335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.390382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.390590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.390635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.390808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.390850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.391041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.391082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.391295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.391318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.391472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.391515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.391718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.391759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.391898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.391926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.392143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.392185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.392368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.392406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 
00:26:50.921 [2024-07-25 09:41:23.392583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.392606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.392788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.392830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.393000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.393040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.393218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.393241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.393448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.393490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.393675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.393715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.393935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.393977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.394190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.394217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.394454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.394496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 00:26:50.921 [2024-07-25 09:41:23.394708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.394750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.921 qpair failed and we were unable to recover it. 
00:26:50.921 [2024-07-25 09:41:23.394886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.921 [2024-07-25 09:41:23.394913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.395131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.395172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.395295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.395342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.395584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.395625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.395790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.395839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.396070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.396111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.396393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.396417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.396604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.396650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.396796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.396838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.396988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.397016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 
00:26:50.922 [2024-07-25 09:41:23.397225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.397248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.397386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.397426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.397551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.397579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.397794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.397835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.398043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.398083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.398300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.398323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.398528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.398570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.398788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.398829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.398998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.399040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.399233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.399255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 
00:26:50.922 [2024-07-25 09:41:23.399450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.399492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.399680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.399725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.399903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.399944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.400154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.400195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.400343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.400396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.400647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.400689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.400915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.400957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.401095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.401122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.401291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.401329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.401501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.401542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 
00:26:50.922 [2024-07-25 09:41:23.401656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.401684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.401863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.401905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.402032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.402060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.402201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.402225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.402442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.402481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.402685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.402708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.402936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.922 [2024-07-25 09:41:23.402959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.922 qpair failed and we were unable to recover it. 00:26:50.922 [2024-07-25 09:41:23.403190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.403212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.403367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.403391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.403581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.403626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 
00:26:50.923 [2024-07-25 09:41:23.403828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.403870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.404057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.404098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.404276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.404299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.404495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.404538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.404657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.404699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.404880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.404920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.405164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.405204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.405376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.405416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.405659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.405701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 00:26:50.923 [2024-07-25 09:41:23.405837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.923 [2024-07-25 09:41:23.405860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.923 qpair failed and we were unable to recover it. 
00:26:50.923 [2024-07-25 09:41:23.406097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.923 [2024-07-25 09:41:23.406139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:50.923 qpair failed and we were unable to recover it.
[... the same posix_sock_create()/nvme_tcp_qpair_connect_sock error pair repeats for every connection attempt against tqpair=0x7fece8000b90 (addr=10.0.0.2, port=4420), timestamps 2024-07-25 09:41:23.406 through 09:41:23.451, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:50.929 [2024-07-25 09:41:23.451899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.929 [2024-07-25 09:41:23.451941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:50.929 qpair failed and we were unable to recover it.
00:26:50.929 [2024-07-25 09:41:23.452093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.452131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.452314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.452342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.452566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.452609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.452763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.452805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.453008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.453049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.453189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.453212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.453367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.453392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.453581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.453622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.453829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.453870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.454036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.454079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 
00:26:50.929 [2024-07-25 09:41:23.454258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.454281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.454511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.454554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.454773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.454813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.454992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.455033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.455242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.455265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.455446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.455492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.455650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.455694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.455870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.455911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.456070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.456112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.456245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.456283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 
00:26:50.929 [2024-07-25 09:41:23.456435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.456460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.456606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.456647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.456816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.456839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.456970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.457012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.457199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.457221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.457395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.457418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.457637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.457660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.457892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.457934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.458108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.458140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.458269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.458292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 
00:26:50.929 [2024-07-25 09:41:23.458526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.458567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.458756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.458798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.459010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.459052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.459233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.459255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.929 [2024-07-25 09:41:23.459437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.929 [2024-07-25 09:41:23.459466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.929 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.459595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.459638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.459833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.459875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.460037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.460077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.460317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.460353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.460489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.460517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 
00:26:50.930 [2024-07-25 09:41:23.460703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.460745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.460886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.460930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.461053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.461077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.461211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.461235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.461368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.461393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.461521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.461545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.461726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.461749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.461856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.461893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.462052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.462076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.462211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.462235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 
00:26:50.930 [2024-07-25 09:41:23.462374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.462414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.462566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.462594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.462769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.462811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.462954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.462978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.463144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.463183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.463283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.463321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.463471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.463500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.463650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.463678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.463815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.463853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.464018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.464042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 
00:26:50.930 [2024-07-25 09:41:23.464196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.464233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.464405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.464444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.464589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.464612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.464736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.464776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.464945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.464968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.465114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.465137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.465278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.465316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.465470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.465513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.465640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.465669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.465851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.465879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 
00:26:50.930 [2024-07-25 09:41:23.466012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.466035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.466150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.466174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.930 qpair failed and we were unable to recover it. 00:26:50.930 [2024-07-25 09:41:23.466316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.930 [2024-07-25 09:41:23.466355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.466469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.466492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.466630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.466669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.466817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.466856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.467025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.467048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.467145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.467168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.467306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.467330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.467469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.467494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 
00:26:50.931 [2024-07-25 09:41:23.467608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.467633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.467814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.467841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.467988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.468030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.468204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.468227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.468331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.468371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.468543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.468567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.468704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.468729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.468825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.468848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.468989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.469013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.469160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.469197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 
00:26:50.931 [2024-07-25 09:41:23.469344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.469376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.469554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.469582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.469736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.469778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.469945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.469968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.470115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.470138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.470278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.470316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.470516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.470546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.470684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.470726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.470842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.470884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.471006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.471030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 
00:26:50.931 [2024-07-25 09:41:23.471154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.471177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.471307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.471330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.471456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.471481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.471612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.471636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.471759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.931 [2024-07-25 09:41:23.471797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.931 qpair failed and we were unable to recover it. 00:26:50.931 [2024-07-25 09:41:23.471926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.471949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.472087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.472110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.472228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.472252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.472407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.472432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.472567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.472591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 
00:26:50.932 [2024-07-25 09:41:23.472693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.472717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.472854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.472878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.473042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.473080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.473211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.473235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.473346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.473376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.473550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.473573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.473743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.473767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.473903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.473940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.474064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.474088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.474252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.474275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 
00:26:50.932 [2024-07-25 09:41:23.474445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.474470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.474614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.474659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.474798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.474839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.475004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.475027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.475185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.475223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.475368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.475393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.475541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.475569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.475701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.475739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.475867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.475891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 00:26:50.932 [2024-07-25 09:41:23.476000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.476023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it. 
00:26:50.932 [2024-07-25 09:41:23.476170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.932 [2024-07-25 09:41:23.476194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.932 qpair failed and we were unable to recover it.
[The identical pair of errors — posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously through 00:26:50.938 (2024-07-25 09:41:23.511505) with no other output.]
00:26:50.938 [2024-07-25 09:41:23.511738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.511779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.512340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.512406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.512562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.512606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.512746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.512774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.512973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.513013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.513213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.513241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.513833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.513862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.514108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.514162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.514408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.514437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.514823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.514873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 
00:26:50.938 [2024-07-25 09:41:23.515127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.515169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.515341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.515391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.515551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.515578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.515754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.515796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.516005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.516053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.516257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.516283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.516455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.516482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.516647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.516692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.516936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.516979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.517168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.517193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 
00:26:50.938 [2024-07-25 09:41:23.517423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.517474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-07-25 09:41:23.517699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.938 [2024-07-25 09:41:23.517741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.518002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.518044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.518194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.518221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.518392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.518423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.518581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.518635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.518796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.518839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.518990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.519032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.519281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.519306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.519455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.519500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 
00:26:50.939 [2024-07-25 09:41:23.519657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.519699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.519848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.519878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.520011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.520054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.520209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.520236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.520422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.520450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.520544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.520585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.520741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.520782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.520947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.520973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.521219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.521243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.521467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.521495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 
00:26:50.939 [2024-07-25 09:41:23.521676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.521728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.521921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.521962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.522114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.522140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.522255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.522280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.522422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.522466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.522642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.522687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.522884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.522912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.523092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.523117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.523301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.523346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.523512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.523555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 
00:26:50.939 [2024-07-25 09:41:23.523723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.523770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.523955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.523999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.524240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.524273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.524480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.524524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.524658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.524704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.524853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.524896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.525037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.525081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.525225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.525256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.525479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.939 [2024-07-25 09:41:23.525523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.939 qpair failed and we were unable to recover it. 00:26:50.939 [2024-07-25 09:41:23.525628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.525678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 
00:26:50.940 [2024-07-25 09:41:23.525839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.525881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.526050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.526092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.526227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.526252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.526430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.526474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.526611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.526660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.526816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.526843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.526982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.527009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.527179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.527205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.527384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.527420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.527534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.527577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 
00:26:50.940 [2024-07-25 09:41:23.527679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.527709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.527881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.527906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.528094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.528121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.528279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.528320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.528460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.528486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.528649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.528685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.528903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.528945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.529106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.529132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.529244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.529287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.529439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.529485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 
00:26:50.940 [2024-07-25 09:41:23.529610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.529638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.529808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.529832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.529972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.530013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.530142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.530169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.530323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.530350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.530503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.530529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.530709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.530736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.530861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.530886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.531071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.531124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.531290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.531318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 
00:26:50.940 [2024-07-25 09:41:23.531444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.531471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.531609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.940 [2024-07-25 09:41:23.531650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.940 qpair failed and we were unable to recover it. 00:26:50.940 [2024-07-25 09:41:23.531751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.531798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.531933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.531960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.532123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.532148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.532326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.532351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.532499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.532530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.532712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.532738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.532904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.532930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.533088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.533118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 
00:26:50.941 [2024-07-25 09:41:23.533320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.533347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.533495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.533540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.533650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.533692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.533821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.533851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.533973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.534001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.534138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.534171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.534279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.534303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.534473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.534501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.534621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.534647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.534769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.534812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 
00:26:50.941 [2024-07-25 09:41:23.534984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.535026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.535180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.535215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.535305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.535339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.535495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.535522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.535642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.535685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.535929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.535956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.536131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.536157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.536391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.536432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.536543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.536572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.536779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.536823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 
00:26:50.941 [2024-07-25 09:41:23.537008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.537063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.537297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.537324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.537473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.537516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.537628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.537671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.537809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.537835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.538030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.538093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.538263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.538289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.538439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.538465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.538578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.538626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 00:26:50.941 [2024-07-25 09:41:23.538739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.941 [2024-07-25 09:41:23.538785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.941 qpair failed and we were unable to recover it. 
00:26:50.941 [2024-07-25 09:41:23.538994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.941 [2024-07-25 09:41:23.539019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:50.941 qpair failed and we were unable to recover it.
[... the same three-line group repeats roughly 210 times between 09:41:23.538994 and 09:41:23.575835 (elapsed 00:26:50.941-00:26:50.947): every connect() to 10.0.0.2, port 4420 is refused with errno = 111 and the qpair cannot be recovered; all repetitions report tqpair=0x7fece8000b90 except three around 09:41:23.550-23.551, which report tqpair=0x7fecf0000b90 ...]
00:26:50.947 [2024-07-25 09:41:23.575793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.947 [2024-07-25 09:41:23.575835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:50.947 qpair failed and we were unable to recover it.
00:26:50.947 [2024-07-25 09:41:23.575938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.575978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.576100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.576141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.576328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.576369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.576558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.576598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.576747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.576787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.576950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.577003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.577191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.577216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.577372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.577414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.577532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.577576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.577723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.577753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 
00:26:50.947 [2024-07-25 09:41:23.577888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.577931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.578072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.578101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.578231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.578258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.578416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.578444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.578569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.578595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.578718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.578761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.578871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.578917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.579082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.579107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.579220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.579245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.579380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.579406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 
00:26:50.947 [2024-07-25 09:41:23.579549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.579576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.579698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.579738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.579903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.579928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.580061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.580106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.580241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.580287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.580434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.580460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.580609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.580655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.580893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.580918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.581090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.581115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.581325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.581349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 
00:26:50.947 [2024-07-25 09:41:23.581514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.581542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.581717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.581746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.581961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.582010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.582240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.582283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.582467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.582511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.582646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.582691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.582871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.582913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.583149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.583173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.583310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.583337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.583460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.583504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 
00:26:50.947 [2024-07-25 09:41:23.583653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.583699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.583815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.583843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.947 qpair failed and we were unable to recover it. 00:26:50.947 [2024-07-25 09:41:23.584009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.947 [2024-07-25 09:41:23.584036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.584133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.584158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.584290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.584317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.584477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.584502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.584738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.584765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.584941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.584967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.585129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.585153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.585402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.585430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 
00:26:50.948 [2024-07-25 09:41:23.585592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.585643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.585837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.585880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.586053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.586095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.586336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.586385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.586495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.586539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.586663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.586712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.586837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.586884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.587067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.587108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.587255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.587281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.587459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.587509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 
00:26:50.948 [2024-07-25 09:41:23.587651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.587694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.587797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.587847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.588091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.588141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.588367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.588407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.588584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.588628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.588791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.588834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.589002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.589044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.589209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.589235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.589447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.589491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.589651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.589675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 
00:26:50.948 [2024-07-25 09:41:23.589864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.589890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.590034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.590077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.590220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.590259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.590434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.590465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.590704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.590730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.590907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.590938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.591108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.591132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.591334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.591373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.591564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.591619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.948 [2024-07-25 09:41:23.591757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.591785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 
00:26:50.948 [2024-07-25 09:41:23.591964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.948 [2024-07-25 09:41:23.591993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.948 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.592141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.592166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.592328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.592382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.592561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.592604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.592726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.592768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.592890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.592931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.593114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.593139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.593311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.593336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.593559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.593603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.593719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.593744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 
00:26:50.949 [2024-07-25 09:41:23.593869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.593895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.594091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.594135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.594286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.594321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.594514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.594557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.594724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.594766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.594885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.594916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.595042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.595067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.595222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.595263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.595433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.595484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.595620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.595662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 
00:26:50.949 [2024-07-25 09:41:23.595797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.595840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.595996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.596020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.596196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.596223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.596345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.596386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.596503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.596548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.596637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.596664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.596863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.596890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.597103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.597128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.597256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.597296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.597537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.597581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 
00:26:50.949 [2024-07-25 09:41:23.597809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.597853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.598031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.598075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.598323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.598349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.598479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.598522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.598736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.598779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.598964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.599008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.599219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.599251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.599491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.599534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.599683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.599735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.599929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.599973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 
00:26:50.949 [2024-07-25 09:41:23.600102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.600126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.600290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.600316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.600451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.600494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.600710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.600753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.600924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.600964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.601154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.949 [2024-07-25 09:41:23.601179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.949 qpair failed and we were unable to recover it. 00:26:50.949 [2024-07-25 09:41:23.601363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.601403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.601552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.601600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.601787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.601828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.602006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.602050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 
00:26:50.950 [2024-07-25 09:41:23.602290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.602317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.602464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.602507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.602709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.602752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.602911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.602943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.603085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.603131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.603261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.603309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.603487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.603514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.603636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.603678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.603829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.603873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.604052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.604078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 
00:26:50.950 [2024-07-25 09:41:23.604233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.604259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.604375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.604401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.604495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.604527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.604680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.604722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.604940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.604983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.605165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.605190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.605339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.605402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.605535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.605577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.605727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.605774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.606028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.606071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 
00:26:50.950 [2024-07-25 09:41:23.606305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.606330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.606483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.606531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.606740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.606788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.606962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.607005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.607197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.607228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.607381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.607414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.607576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.607622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.607856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.607905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.608155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.608182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.608391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.608434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 
00:26:50.950 [2024-07-25 09:41:23.608571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.608615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.608753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.608793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.609033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.609081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.609246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.609273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.609463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.609508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.609723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.609765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.609957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.610001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.610191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.610236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.610451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.610494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.610676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.610719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 
00:26:50.950 [2024-07-25 09:41:23.610973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.611015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.611128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.611153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.611329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.950 [2024-07-25 09:41:23.611369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.950 qpair failed and we were unable to recover it. 00:26:50.950 [2024-07-25 09:41:23.611496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.611538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.611744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.612028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.612075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.612291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.612317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.612453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.612480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.612601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.612651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.612870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.612913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 
00:26:50.951 [2024-07-25 09:41:23.613151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.613194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.613399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.613433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.613602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.613629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.613798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.613838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.614018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.614061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.614220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.614251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.614443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.614470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.614614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.614663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.614830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.614872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.615065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.615110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 
00:26:50.951 [2024-07-25 09:41:23.615274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.615315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.615481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.615526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.615684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.615728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.615852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.615896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.616028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.616077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.616205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.616230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.616377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.616420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.616551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.616593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.616750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.616780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.616936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.616977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 
00:26:50.951 [2024-07-25 09:41:23.617141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.617166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.617263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.617288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.617462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.617506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.617644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.617691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.617902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.617943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.618076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.618105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.618304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.618345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.618514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.618541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.618708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.618734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.618864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.618896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 
00:26:50.951 [2024-07-25 09:41:23.619083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.619110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.619273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.619308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.619473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.619518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.619756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.619798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.619979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.620030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.620224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.620268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.620441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.620485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.620636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.620680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.620807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.620849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 00:26:50.951 [2024-07-25 09:41:23.621059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.951 [2024-07-25 09:41:23.621086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.951 qpair failed and we were unable to recover it. 
00:26:50.951 [2024-07-25 09:41:23.621308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.621334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.621544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.621599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.621811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.621852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.622051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.622091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.622367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.622422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.622593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.622622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.622746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.622774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.622970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.623008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.623183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.623221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.623454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.623482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 
00:26:50.952 [2024-07-25 09:41:23.623670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.623712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.623947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.623990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.624102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.624149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.624349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.624386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.624542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.624568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.624700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.624746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.624883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.624926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.625094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.625139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.625335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.625380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.625520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.625546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 
00:26:50.952 [2024-07-25 09:41:23.625758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.625802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.625963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.626007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.626135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.626162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.626307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.626364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.626526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.626570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.626704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.626753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.626968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.627011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.627179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.627205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.627338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.627374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.627525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.627552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 
00:26:50.952 [2024-07-25 09:41:23.627799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.627825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.627951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.627978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.628102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.628146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.628279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.628305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.628484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.628511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.628657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.628700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.628880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.628906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.629098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.629124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.629254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.629288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.629513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.629540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 
00:26:50.952 [2024-07-25 09:41:23.629713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.629755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.952 [2024-07-25 09:41:23.629959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.952 [2024-07-25 09:41:23.630001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.952 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.630125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.630155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.630369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.630395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.630557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.630598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.630740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.630783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.630980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.631039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.631217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.631241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.631438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.631469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.631604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.631629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 
00:26:50.953 [2024-07-25 09:41:23.631762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.631791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.631920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.631945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.632116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.632155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.632368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.632416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.632572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.632597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.632728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.632753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.632893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.632918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.633058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.633083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.633241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.633266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.633454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.633480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 
00:26:50.953 [2024-07-25 09:41:23.633606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.633652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.633837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.633861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.634024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.634052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.634195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.634220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:50.953 [2024-07-25 09:41:23.634392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.953 [2024-07-25 09:41:23.634418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:50.953 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.634556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.634603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.634803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.634826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.635010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.635052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.635185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.635209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.635427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.635457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 
00:26:51.230 [2024-07-25 09:41:23.635673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.635714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.635931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.635972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.636103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.636129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.230 qpair failed and we were unable to recover it. 00:26:51.230 [2024-07-25 09:41:23.636264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.230 [2024-07-25 09:41:23.636289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.636419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.636463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.636615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.636661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.636780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.636822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.637032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.637073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.637290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.637315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.637450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.637502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 
00:26:51.231 [2024-07-25 09:41:23.637686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.637727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.637933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.637983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.638138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.638176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.638352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.638385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.638555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.638597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.638825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.638867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.639007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.639035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.639244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.639268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.639479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.639522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.639631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.639678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 
00:26:51.231 [2024-07-25 09:41:23.639881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.639920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.640097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.640138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.640343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.640375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.640558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.640606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.640762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.640803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.640940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.640982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.641142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.641183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.641269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.641293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.641426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.641469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.641662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.641703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 
00:26:51.231 [2024-07-25 09:41:23.641863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.641903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.642052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.642092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.642307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.642330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.642516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.642558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.642684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.642725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.642859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.642887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.643054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.643099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.643252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.643290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.643453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.231 [2024-07-25 09:41:23.643495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.231 qpair failed and we were unable to recover it. 00:26:51.231 [2024-07-25 09:41:23.643612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.643640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 
00:26:51.232 [2024-07-25 09:41:23.643833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.643875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.644091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.644133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.644313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.644336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.644584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.644627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.644803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.644845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.645043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.645087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.645262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.645286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.645486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.645529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.645748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.645789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.645926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.645966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 
00:26:51.232 [2024-07-25 09:41:23.646188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.646211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.646412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.646441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.646599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.646628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.646851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.646892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.647076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.647118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.647350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.647398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.647528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.647570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.647678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.647720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.647917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.647959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.648187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.648228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 
00:26:51.232 [2024-07-25 09:41:23.648448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.648490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.648722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.648763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.648935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.648976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.649195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.649218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.649381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.649406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.649563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.649602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.649752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.649797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.649926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.649964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.650181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.650204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.650445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.650469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 
00:26:51.232 [2024-07-25 09:41:23.650581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.650605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.650854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.650877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.651025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.651048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.651178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.651201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.651391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.651416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.232 qpair failed and we were unable to recover it. 00:26:51.232 [2024-07-25 09:41:23.651595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.232 [2024-07-25 09:41:23.651638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.651889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.651936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.652090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.652122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.652306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.652329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.652580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.652624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 
00:26:51.233 [2024-07-25 09:41:23.652813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.652853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.652985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.653008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.653182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.653206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.653396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.653439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.653579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.653607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.653838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.653882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.654042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.654065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.654306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.654329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.654558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.654601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.654764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.654806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 
00:26:51.233 [2024-07-25 09:41:23.654994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.655036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.655205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.655231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.655405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.655429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.655629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.655671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.655834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.655875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.656008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.656035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.656274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.656297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.656451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.656494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.656683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.656723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.656900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.656942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 
00:26:51.233 [2024-07-25 09:41:23.657215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.657237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.657381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.657405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.657609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.657649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.657799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.657842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.658005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.658045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.658270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.658293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.658454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.658496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.658704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.658745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.658910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.658952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.659170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.659193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 
00:26:51.233 [2024-07-25 09:41:23.659389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.659414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.233 qpair failed and we were unable to recover it. 00:26:51.233 [2024-07-25 09:41:23.659624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.233 [2024-07-25 09:41:23.659665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.659901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.659943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.660144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.660188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.660289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.660338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.660487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.660530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.660717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.660758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.660912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.660963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.661202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.661242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.661401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.661424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 
00:26:51.234 [2024-07-25 09:41:23.661623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.661664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.661839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.661881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.662069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.662110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.662291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.662313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.662507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.662549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.662685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.662718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.662893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.662943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.663120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.663143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.663383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.663407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.663579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.663621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 
00:26:51.234 [2024-07-25 09:41:23.663814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.663854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.664055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.664096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.664261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.664284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.664469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.664511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.664706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.664748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.664906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.664944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.665161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.665205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.665424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.665449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.665653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.665681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.665905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.665933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 
00:26:51.234 [2024-07-25 09:41:23.666155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.666183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.666410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.666435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.666664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.666691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.666923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.666969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.667184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.667211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.667378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.667417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.667617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.234 [2024-07-25 09:41:23.667657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.234 qpair failed and we were unable to recover it. 00:26:51.234 [2024-07-25 09:41:23.667874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.667919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.668159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.668186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.668377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.668417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 
00:26:51.235 [2024-07-25 09:41:23.668610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.668649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.668804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.668855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.669084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.669112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.669315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.669342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.669582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.669606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.669849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.669872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.670114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.670142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.670269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.670297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.670484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.670509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.670666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.670699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 
00:26:51.235 [2024-07-25 09:41:23.670864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.670892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.671045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.671072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.671246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.671274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.235 [2024-07-25 09:41:23.671419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.235 [2024-07-25 09:41:23.671442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.235 qpair failed and we were unable to recover it. 00:26:51.236 [2024-07-25 09:41:23.671695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.236 [2024-07-25 09:41:23.671723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.236 qpair failed and we were unable to recover it. 00:26:51.236 [2024-07-25 09:41:23.671911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.236 [2024-07-25 09:41:23.671952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.236 qpair failed and we were unable to recover it. 00:26:51.236 [2024-07-25 09:41:23.672170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.236 [2024-07-25 09:41:23.672198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.236 qpair failed and we were unable to recover it. 00:26:51.236 [2024-07-25 09:41:23.672444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.236 [2024-07-25 09:41:23.672470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.236 qpair failed and we were unable to recover it. 00:26:51.236 [2024-07-25 09:41:23.672659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.236 [2024-07-25 09:41:23.672699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.236 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.672897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.672924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 
00:26:51.237 [2024-07-25 09:41:23.673110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.673137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.673311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.673338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.673521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.673553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.673722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.673749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.673903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.673931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.674120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.674148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.674311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.674338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.674551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.674575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.674734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.674762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.674951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.674974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 
00:26:51.237 [2024-07-25 09:41:23.675143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.675170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.675322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.675350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.675529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.675554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.675752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.675789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.675940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.675967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.676087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.676120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.676278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.676305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.676441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.676465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.676563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.676594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.676744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.676772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 
00:26:51.237 [2024-07-25 09:41:23.676953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.676981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.677207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.677235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.677512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.677536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.677684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.677712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.677871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.677899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.678144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.678194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.678429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.678453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.678625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.678667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.678825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.678853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.679084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.679138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 
00:26:51.237 [2024-07-25 09:41:23.679312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.679339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.679477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.679501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.679688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.679716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.237 qpair failed and we were unable to recover it. 00:26:51.237 [2024-07-25 09:41:23.679954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.237 [2024-07-25 09:41:23.679976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.680123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.680150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.680277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.680309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.680467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.680491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.680695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.680731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.680891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.680929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.681061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.681088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 
00:26:51.238 [2024-07-25 09:41:23.681258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.681285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.681400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.681424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.681611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.681634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.681850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.681878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.682032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.682060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.682252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.682279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.682463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.682487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.682697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.682724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.682950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.682978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.683130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.683158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 
00:26:51.238 [2024-07-25 09:41:23.683346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.683385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.683569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.683593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.683756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.683783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.683911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.683948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.684148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.684176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.684335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.684369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.684584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.684611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.684798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.684822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.685060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.685087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.685257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.685284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 
00:26:51.238 [2024-07-25 09:41:23.685508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.685536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.685769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.685791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.685966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.685998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.686232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.686259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.686476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.686505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.686694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.686716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.686881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.686919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.687121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.687148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.687345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.687380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.238 [2024-07-25 09:41:23.687569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.687592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 
00:26:51.238 [2024-07-25 09:41:23.687754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.238 [2024-07-25 09:41:23.687776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.238 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.688015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.688042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.688200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.688227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.688382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.688408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.688564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.688587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.688781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.688808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.689009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.689036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.689255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.689279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.689518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.689546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.689706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.689734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 
00:26:51.239 [2024-07-25 09:41:23.689853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.689880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.690080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.690102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.690281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.690308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.690506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.690534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.690721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.690748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.690898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.690920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.691112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.691139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.691320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.691347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.691605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.691644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.691849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.691871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 
00:26:51.239 [2024-07-25 09:41:23.692062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.692090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.692226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.692264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.692410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.692438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.692589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.692626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.692787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.692815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.692969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.692996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.693179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.693206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.693334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.693378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.693566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.693593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.693769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.693796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 
00:26:51.239 [2024-07-25 09:41:23.694020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.694047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.694235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.694258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.694498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.694551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.694764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.694792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.694974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.695021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.695217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.695240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.695481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.239 [2024-07-25 09:41:23.695508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.239 qpair failed and we were unable to recover it. 00:26:51.239 [2024-07-25 09:41:23.695675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.695702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.695923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.695950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.696192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.696214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 
00:26:51.240 [2024-07-25 09:41:23.696449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.696477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.696646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.696673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.696894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.696922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.697142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.697164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.697349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.697383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.697625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.697652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.697861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.697889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.698084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.698106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.698285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.698312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.698527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.698555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 
00:26:51.240 [2024-07-25 09:41:23.698740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.698767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.698932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.698954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.699163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.699191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.699390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.699431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.699633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.699674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.699866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.699889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.700077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.700105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.700331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.700366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.700589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.700616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.700793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.700816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 
00:26:51.240 [2024-07-25 09:41:23.701034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.701062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.701286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.701313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.701523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.701547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.701713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.701736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.701955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.701983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.702207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.702235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.702459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.702488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.702705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.702727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.702934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.702962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.703119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.703146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 
00:26:51.240 [2024-07-25 09:41:23.703385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.703413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.703612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.703650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.703876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.703908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.704130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.704157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.240 [2024-07-25 09:41:23.704370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.240 [2024-07-25 09:41:23.704398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.240 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.704542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.704566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.704720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.704757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.704933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.704960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.705151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.705179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.705368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.705406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 
00:26:51.241 [2024-07-25 09:41:23.705598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.705626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.705817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.705844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.706014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.706041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.706257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.706279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.706509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.706537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.706765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.706793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.706978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.707006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.707221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.707243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.707430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.707458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.707642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.707670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 
00:26:51.241 [2024-07-25 09:41:23.707897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.707924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.708151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.708174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.708333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.708367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.708599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.708627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.708820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.708848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.709025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.709047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.709287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.709314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.709535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.709559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.709779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.709807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.710036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.710059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 
00:26:51.241 [2024-07-25 09:41:23.710206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.710233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.710444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.710473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.710651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.710679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.710856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.710878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.711099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.241 [2024-07-25 09:41:23.711126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.241 qpair failed and we were unable to recover it. 00:26:51.241 [2024-07-25 09:41:23.711410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.711439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.711664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.711691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.711864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.711886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.712069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.712097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.712313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.712340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 
00:26:51.242 [2024-07-25 09:41:23.712540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.712569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.712733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.712755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.712922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.712954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.713113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.713141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.713349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.713383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.713567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.713598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.713836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.713863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.714054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.714081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.714187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.714215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.714334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.714364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 
00:26:51.242 [2024-07-25 09:41:23.714505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.714528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.714760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.714787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.714997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.715024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.715203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.715225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.715408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.715437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.715596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.715624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.715777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.715805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.716030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.716053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.716289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.716317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.716530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.716558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 
00:26:51.242 [2024-07-25 09:41:23.716778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.716805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.717025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.717048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.717268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.717296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.717533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.717557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.717783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.717810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.717990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.718012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.718215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.718242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.718466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.718494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.718682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.718709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.718883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.718905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 
00:26:51.242 [2024-07-25 09:41:23.719098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.719126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.719341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.719388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.719531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.242 [2024-07-25 09:41:23.719559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.242 qpair failed and we were unable to recover it. 00:26:51.242 [2024-07-25 09:41:23.719730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.719752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.719954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.719981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.720198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.720225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.720374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.720402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.720578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.720601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.720725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.720766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.720928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.720955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 
00:26:51.243 [2024-07-25 09:41:23.721154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.721181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.721378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.721417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.721610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.721642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.721866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.721893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.722051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.722078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.722259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.722281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.722479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.722507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.722740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.722767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.722880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.722908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.723131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.723154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 
00:26:51.243 [2024-07-25 09:41:23.723391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.723419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.723580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.723607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.723790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.723818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.724033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.724055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.724286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.724313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.724561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.724588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.724707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.724734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.724952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.724974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.725178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.725206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.725323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.725350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 
00:26:51.243 [2024-07-25 09:41:23.725513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.725554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.725746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.725768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.725985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.726012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.726186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.726213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.726450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.726478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.726656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.726678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.243 [2024-07-25 09:41:23.726884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.243 [2024-07-25 09:41:23.726911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.243 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.727148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.727175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.727351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.727415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.727652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.727675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 
00:26:51.244 [2024-07-25 09:41:23.727869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.727896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.728077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.728104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.728282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.728310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.728534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.728558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.728778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.728805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.729015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.729042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.729216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.729244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.729470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.729495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.729687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.729714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.729896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.729924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 
00:26:51.244 [2024-07-25 09:41:23.730076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.730103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.730321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.730344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.730574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.730606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.730808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.730835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.731010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.731038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.731273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.731295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.731425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.731454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.731648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.731675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.731837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.731864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.732031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.732054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 
00:26:51.244 [2024-07-25 09:41:23.732276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.732304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.732524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.732552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.732769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.732797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.733030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.733053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.733278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.733306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.733494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.733518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.733735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.733763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.733962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.733985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.734146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.734168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.734392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.734420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 
00:26:51.244 [2024-07-25 09:41:23.734650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.734677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.734860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.734883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.244 [2024-07-25 09:41:23.735073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.244 [2024-07-25 09:41:23.735101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.244 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.735279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.735306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.735532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.735556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.735728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.735750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.735977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.736004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.736235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.736262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.736450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.736475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.736666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.736689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 
00:26:51.245 [2024-07-25 09:41:23.736872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.736910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.737036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.737063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.737239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.737266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.737425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.737450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.737633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.737660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.737886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.737914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.738132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.738159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.738296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.738319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.738546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.738570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.738798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.738826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 
00:26:51.245 [2024-07-25 09:41:23.739038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.739065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.739264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.739286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.739506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.739538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.739756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.739784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.739938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.739965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.740175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.740198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.740394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.740422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.740606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.740633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.740801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.740828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.741021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.741043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 
00:26:51.245 [2024-07-25 09:41:23.741208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.741247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.741430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.741458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.741626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.741653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.741878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.741901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.742093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.742129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.742331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.742364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.742518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.742546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.742766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.742789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.742902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.245 [2024-07-25 09:41:23.742930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.245 qpair failed and we were unable to recover it. 00:26:51.245 [2024-07-25 09:41:23.743151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.743179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 
00:26:51.246 [2024-07-25 09:41:23.743402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.743429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.743566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.743589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.743757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.743798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.744010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.744037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.744252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.744279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.744498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.744522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.744690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.744717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.744954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.744982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.745137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.745164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.745404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.745428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 
00:26:51.246 [2024-07-25 09:41:23.745661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.745689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.745868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.745896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.746123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.746150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.746326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.746353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.746582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.746605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.746793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.746820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.746984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.747011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.747136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.747163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.747406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.747430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.747614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.747653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 
00:26:51.246 [2024-07-25 09:41:23.747866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.747893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.748111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.748133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.748383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.748416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.748606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.748634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.748774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.748802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.749010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.749032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.749248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.749275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.749484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.749512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.749688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.749715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.246 qpair failed and we were unable to recover it. 00:26:51.246 [2024-07-25 09:41:23.749931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.246 [2024-07-25 09:41:23.749953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 
00:26:51.247 [2024-07-25 09:41:23.750195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.750222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.750427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.750455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.750629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.750656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.750876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.750899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.751059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.751087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.751272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.751299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.751521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.751545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.751769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.751791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.751983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.752021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.752231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.752258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 
00:26:51.247 [2024-07-25 09:41:23.752468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.752496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.752713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.752736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.752959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.752986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.753133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.753160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.753341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.753375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.753602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.753626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.753778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.753800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.753994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.754022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.754234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.754261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.754495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.754519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 
00:26:51.247 [2024-07-25 09:41:23.754737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.754764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.754984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.755011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.755189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.755216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.755387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.755410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.755601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.755628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.755846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.755873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.756108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.756135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.756308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.756330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.756484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.756508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.756736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.756764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 
00:26:51.247 [2024-07-25 09:41:23.756982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.757010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.757208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.757230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.757430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.757464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.757693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.757720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.757947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.757975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.758118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.247 [2024-07-25 09:41:23.758140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.247 qpair failed and we were unable to recover it. 00:26:51.247 [2024-07-25 09:41:23.758332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.758366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.758586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.758614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.758834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.758861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.759079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.759102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 
00:26:51.248 [2024-07-25 09:41:23.759322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.759350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.759590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.759618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.759845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.759872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.760055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.760077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.760272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.760299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.760536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.760564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.760799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.760827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.761047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.761069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.761307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.761334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.761527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.761554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 
00:26:51.248 [2024-07-25 09:41:23.761764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.761791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.761969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.762002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.762246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.762285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.762522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.762563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.762733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.762772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.763020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.763048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.763221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.763246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.763460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.763496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.763711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.763762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.764036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.764068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 
00:26:51.248 [2024-07-25 09:41:23.764327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.764371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.764577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.764612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.764862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.764902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.765129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.765164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.765390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.765426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.765666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.765700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.765917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.765957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.766213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.766261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.766538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.766571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.766792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.766832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 
00:26:51.248 [2024-07-25 09:41:23.767009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.767047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.767313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.248 [2024-07-25 09:41:23.767345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.248 qpair failed and we were unable to recover it. 00:26:51.248 [2024-07-25 09:41:23.767601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.767656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.767906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.767940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.768174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.768208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.768473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.768513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.768707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.768735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.768924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.768970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.769209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.769253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.769375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.769403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 
00:26:51.249 [2024-07-25 09:41:23.769587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.769631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.769853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.769881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.770105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.770130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.770345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.770382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.770564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.770590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.770813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.770839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.771044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.771087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.771317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.771343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.771579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.771606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.771816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.771860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 
00:26:51.249 [2024-07-25 09:41:23.772077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.772124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.772283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.772309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.772534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.772561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.772774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.772817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.773004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.773051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.773280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.773303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.773514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.773541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.773763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.773806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.774061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.774104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.774346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.774393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 
00:26:51.249 [2024-07-25 09:41:23.774593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.774620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.774846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.774886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.775098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.775139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.775411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.775460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.775724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.775751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.775978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.776004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.249 [2024-07-25 09:41:23.776237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.249 [2024-07-25 09:41:23.776272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.249 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.776515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.776543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.776783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.776810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.776999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.777045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 
00:26:51.250 [2024-07-25 09:41:23.777218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.777261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.777457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.777484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.777662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.777716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.777951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.777993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.778240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.778283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.778472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.778500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.778677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.778721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.778890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.778933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.779139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.779182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.779385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.779413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 
00:26:51.250 [2024-07-25 09:41:23.779662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.779703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.779900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.779946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.780169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.780211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.780475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.780505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.780711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.780759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.780967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.781017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.781145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.781171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.781346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.781394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.781608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.781638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.781858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.781901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 
00:26:51.250 [2024-07-25 09:41:23.782051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.782096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.782307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.782333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.782579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.782605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.782761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.782806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.783058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.783085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.783271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.783297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.783518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.783545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.783762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.783806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.784000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.784046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.784294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.784321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 
00:26:51.250 [2024-07-25 09:41:23.784513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.784540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.784760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.784790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.785028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.785071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.250 [2024-07-25 09:41:23.785261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.250 [2024-07-25 09:41:23.785286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.250 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.785515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.785542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.785766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.785812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.786049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.786093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.786323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.786353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.786612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.786658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.786887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.786930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 
00:26:51.251 [2024-07-25 09:41:23.787096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.787139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.787353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.787409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.787586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.787623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.787840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.787883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.788074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.788117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.788346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.788384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.788611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.788639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.788852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.788896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.789065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.789107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.789327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.789354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 
00:26:51.251 [2024-07-25 09:41:23.789583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.789610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.789767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.789812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.790014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.790056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.790235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.790261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.790463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.790506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.790694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.790737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.790956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.791000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.791211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.791256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.791496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.791540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.791737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.791779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 
00:26:51.251 [2024-07-25 09:41:23.791992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.792035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.792226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.792252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.792437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.792481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.792675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.251 [2024-07-25 09:41:23.792720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.251 qpair failed and we were unable to recover it. 00:26:51.251 [2024-07-25 09:41:23.792907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.792950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.793130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.793172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.793398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.793425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.793594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.793642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.793827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.793874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.794115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.794159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 
00:26:51.252 [2024-07-25 09:41:23.794346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.794389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.794601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.794645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.794865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.794910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.795110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.795151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.795348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.795394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.795630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.795672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.795907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.795954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.796178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.796220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.796455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.796482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.796697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.796740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 
00:26:51.252 [2024-07-25 09:41:23.796950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.796991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.797227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.797273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.797502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.797532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.797772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.797815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.798019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.798046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.798253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.798279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.798501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.798528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.798756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.798798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.799023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.799065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.799287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.799314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 
00:26:51.252 [2024-07-25 09:41:23.799526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.799553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.799745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.799775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.800018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.800061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.800283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.800309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.800470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.800498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.800658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.800701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.800943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.800986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.801170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.801214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.801409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.801447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.801689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.801732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 
00:26:51.252 [2024-07-25 09:41:23.801959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.252 [2024-07-25 09:41:23.802004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.252 qpair failed and we were unable to recover it. 00:26:51.252 [2024-07-25 09:41:23.802191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.802216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.802397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.802444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.802695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.802738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.802945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.802988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.803183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.803209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.803458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.803505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.803721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.803765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.804006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.804047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.804287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.804317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 
00:26:51.253 [2024-07-25 09:41:23.804530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.804578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.804803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.804846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.805066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.805115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.805286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.805311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.805494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.805521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.805738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.805782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.805964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.806008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.806232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.806258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.806436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.806480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.806648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.806695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 
00:26:51.253 [2024-07-25 09:41:23.806882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.806924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.807120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.807162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.807385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.807413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.807612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.807659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.807839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.807883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.808106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.808149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.808332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.808368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.808540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.808566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.808832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.808877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.809109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.809139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 
00:26:51.253 [2024-07-25 09:41:23.809282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.809335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.809478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.809503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.809668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.809695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.809850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.253 [2024-07-25 09:41:23.809876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.253 qpair failed and we were unable to recover it. 00:26:51.253 [2024-07-25 09:41:23.810096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.810124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.810350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.810386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.810584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.810609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.810825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.810849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.811037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.811062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.811235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.811259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 
00:26:51.254 [2024-07-25 09:41:23.811438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.811464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.811669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.811694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.811915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.811939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.812107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.812131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.812298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.812326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.812511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.812536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.812746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.812770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.812946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.812970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.813200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.813249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.813474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.813513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 
00:26:51.254 [2024-07-25 09:41:23.813731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.813755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.813961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.813985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.814166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.814193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.814424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.814448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.814628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.814652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.814776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.814800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.814954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.814979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.815185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.815209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.815388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.815412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.815585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.815609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 
00:26:51.254 [2024-07-25 09:41:23.815783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.815808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.816018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.816042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.816222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.816246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.816458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.816488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.816721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.816745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.816899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.816925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.817070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.817096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.817312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.817340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.254 qpair failed and we were unable to recover it. 00:26:51.254 [2024-07-25 09:41:23.817540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.254 [2024-07-25 09:41:23.817564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.817794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.817822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 
00:26:51.255 [2024-07-25 09:41:23.817979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.818027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.818233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.818261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.818476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.818500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.818721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.818749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.818976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.819022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.819215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.819242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.819433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.819459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.819613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.819651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.819883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.819932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.820113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.820140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 
00:26:51.255 [2024-07-25 09:41:23.820354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.820387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.820604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.820643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.820875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.820925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.821151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.821178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.821387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.821411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.821654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.821678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.821930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.821979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.822167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.822195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.822389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.822413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.822573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.822595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 
00:26:51.255 [2024-07-25 09:41:23.822817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.822870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.823075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.823102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.823337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.823371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.823564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.823587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.823776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.823799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.823974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.824002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.824234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.824288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.824500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.824523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.824714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.824759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.824969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.824995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 
00:26:51.255 [2024-07-25 09:41:23.825227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.825274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.825449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.825472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.825607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.825649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.825831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.825858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.826119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.826141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.255 [2024-07-25 09:41:23.826371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.255 [2024-07-25 09:41:23.826412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.255 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.826657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.826719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.826934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.826962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.827160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.827210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.827427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.827451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 
00:26:51.256 [2024-07-25 09:41:23.827636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.827676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.827863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.827891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.828066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.828087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.828230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.828270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.828495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.828524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.828659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.828686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.828899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.828921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.829150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.829178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.829373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.829400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.829617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.829643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 
00:26:51.256 [2024-07-25 09:41:23.829833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.829855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.830075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.830103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.830306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.830334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.830566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.830594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.830811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.830834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.831080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.831108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.831340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.831373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.831531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.831558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.831736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.831773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.831936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.831962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 
00:26:51.256 [2024-07-25 09:41:23.832191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.832239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.832451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.832483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.832700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.832723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.832942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.832969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.833202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.833251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.833433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.833460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.833677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.833700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.256 [2024-07-25 09:41:23.833888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.256 [2024-07-25 09:41:23.833915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.256 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.834100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.834148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.834371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.834398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 
00:26:51.257 [2024-07-25 09:41:23.834551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.834574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.834790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.834818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.835049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.835097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.835321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.835349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.835609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.835633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.835876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.835903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.836081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.836129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.836344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.836381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.836544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.836567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.836734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.836760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 
00:26:51.257 [2024-07-25 09:41:23.836950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.837000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.837210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.837258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.837476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.837500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.837696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.837724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.837947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.837994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.838211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.838239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.838424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.838462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.838659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.838687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.838912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.838966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.839189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.839217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 
00:26:51.257 [2024-07-25 09:41:23.839439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.839462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.839653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.839681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.839911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.839961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.840099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.840127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.840344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.840387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.840558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.840581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.840814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.840861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.841080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.841108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.841281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.841303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.841530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.841553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 
00:26:51.257 [2024-07-25 09:41:23.841750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.841801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.842031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.842059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.842296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.842319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.842561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.842585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.257 [2024-07-25 09:41:23.842820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.257 [2024-07-25 09:41:23.842868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.257 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.843080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.843107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.843320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.843343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.843536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.843560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.843801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.843851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.844075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.844103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 
00:26:51.258 [2024-07-25 09:41:23.844326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.844349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.844596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.844624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.844817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.844866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.845044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.845071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.845258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.845280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.845508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.845536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.845772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.845820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.845980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.846008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.846186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.846215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.846440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.846471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 
00:26:51.258 [2024-07-25 09:41:23.846691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.846719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.846905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.846932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.847142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.847165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.847395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.847423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.847633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.847660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.847822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.847849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.848040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.848063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.848290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.848317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.848547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.848575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.848756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.848788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 
00:26:51.258 [2024-07-25 09:41:23.848972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.848995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.849221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.849249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.849437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.849465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.849635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.849662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.849891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.849914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.850144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.850171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.850411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.850435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.850661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.850688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.850908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.850931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 00:26:51.258 [2024-07-25 09:41:23.851128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.258 [2024-07-25 09:41:23.851155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.258 qpair failed and we were unable to recover it. 
00:26:51.258 [2024-07-25 09:41:23.851385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.851414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.851630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.851657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.851882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.851905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.852137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.852165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.852347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.852381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.852599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.852626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.852847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.852870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.853099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.853126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.853348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.853383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.853558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.853585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 
00:26:51.259 [2024-07-25 09:41:23.853818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.853840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.854044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.854072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.854243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.854271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.854500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.854528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.854677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.854698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.854894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.854920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.855099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.855148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.855374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.855401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.855613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.855637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.855871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.855898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 
00:26:51.259 [2024-07-25 09:41:23.856121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.856172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.856346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.856380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.856567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.856590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.856736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.856764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.856994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.857043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.857253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.857281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.857499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.857522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.857732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.857758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.857936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.857983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.858160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.858187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 
00:26:51.259 [2024-07-25 09:41:23.858432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.858456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.858613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.858652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.858834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.858889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.859012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.859040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.859257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.859284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.859501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.859526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.259 [2024-07-25 09:41:23.859724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.259 [2024-07-25 09:41:23.859780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.259 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.859913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.859939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.860156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.860177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.860371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.860411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 
00:26:51.260 [2024-07-25 09:41:23.860603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.860626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.860853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.860880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.861069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.861092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.861325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.861352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.861661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.861689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.861835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.861862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.862090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.862112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.862283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.862309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.862534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.862562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.862746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.862772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 
00:26:51.260 [2024-07-25 09:41:23.862987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.863008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.863240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.863267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.863448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.863476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.863658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.863685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.863859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.863881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.864113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.864141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.864301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.864327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.864478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.864511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.864687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.864709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.864909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.864936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 
00:26:51.260 [2024-07-25 09:41:23.865160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.865207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.865390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.865417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.865585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.865607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.865781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.865808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.866026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.866073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.866247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.866275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.866459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.866481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.866706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.866733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.866923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.866971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.867136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.867163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 
00:26:51.260 [2024-07-25 09:41:23.867401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.867426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.260 qpair failed and we were unable to recover it. 00:26:51.260 [2024-07-25 09:41:23.867621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.260 [2024-07-25 09:41:23.867660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.867902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.867951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.868121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.868148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.868370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.868411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.868618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.868659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.868842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.868892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.869068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.869095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.869280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.869302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.869493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.869516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 
00:26:51.261 [2024-07-25 09:41:23.869705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.869754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.869939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.869966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.870193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.870215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.870448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.870476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.870713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.870763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.870985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.871012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.871233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.871255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.871447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.871476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.871716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.871765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.871980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.872007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 
00:26:51.261 [2024-07-25 09:41:23.872170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.872192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.872388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.872416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.872640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.872692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.872830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.872856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.873079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.873102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.873319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.873346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.873573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.873600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.873816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.873843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.874035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.261 [2024-07-25 09:41:23.874057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.261 qpair failed and we were unable to recover it. 00:26:51.261 [2024-07-25 09:41:23.874242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.262 [2024-07-25 09:41:23.874270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.262 qpair failed and we were unable to recover it. 
00:26:51.267 [2024-07-25 09:41:23.910942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.910969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.911120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.911147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.911262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.911288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.911394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.911418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.911542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.911565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.911733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.911760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.911920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.911942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.912072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.267 [2024-07-25 09:41:23.912112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.267 qpair failed and we were unable to recover it. 00:26:51.267 [2024-07-25 09:41:23.912234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.912261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.912370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.912398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 
00:26:51.268 [2024-07-25 09:41:23.912529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.912552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.912705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.912742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.912864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.912891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.913917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.913940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 
00:26:51.268 [2024-07-25 09:41:23.914045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.914072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.914218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.914244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.914371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.914412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.914539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.914577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.914702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.914730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.914845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.914872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.915020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.915043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.915185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.915208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.915349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.915383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.915536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.915563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 
00:26:51.268 [2024-07-25 09:41:23.915694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.915732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.915867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.915890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.916035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.916062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.916210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.916237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.916405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.916430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.916592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.916615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.916757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.916784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.916906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.916933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.917050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.917073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.917211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.917234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 
00:26:51.268 [2024-07-25 09:41:23.917352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.917402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.917537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.268 [2024-07-25 09:41:23.917561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.268 qpair failed and we were unable to recover it. 00:26:51.268 [2024-07-25 09:41:23.917692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.917729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.917808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.917832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.918011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.918039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.918186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.918213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.918386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.918410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.918562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.918590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.918716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.918743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.918861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.918889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 
00:26:51.269 [2024-07-25 09:41:23.918988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.919011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.919141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.919164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.919291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.919318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.919470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.919498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.919664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.919687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.919823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.919863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.920016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.920043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.920167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.920195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.920323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.920366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.920499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.920524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 
00:26:51.269 [2024-07-25 09:41:23.920677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.920705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.920799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.920827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.920983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.921096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.921261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.921427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.921584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.921707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.921847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.921874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.922025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.922052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 
00:26:51.269 [2024-07-25 09:41:23.922221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.922247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.922398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.922421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.922580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.922608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.922704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.922732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.922854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.922877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.923055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.923092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.923230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.923258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.923383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.923426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.923523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.923547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 00:26:51.269 [2024-07-25 09:41:23.923645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.269 [2024-07-25 09:41:23.923668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.269 qpair failed and we were unable to recover it. 
00:26:51.269 [2024-07-25 09:41:23.923837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.923864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.923978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.924005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.924158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.924181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.924298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.924321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.924505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.924532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.924684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.924711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.924845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.924882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.925018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.925057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.925209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.925237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.925331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.925365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 
00:26:51.270 [2024-07-25 09:41:23.925481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.925505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.925621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.925643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.925812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.925840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.925999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.926026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.926162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.926199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.926327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.926375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.926515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.926542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.926666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.926693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.926833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.926870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.927022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.927063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 
00:26:51.270 [2024-07-25 09:41:23.927223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.927250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.927382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.927411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.927546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.927570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.927704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.927727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.927892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.927919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.928045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.928072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.928169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.928192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.928326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.928349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.928498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.928526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.928675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.928702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 
00:26:51.270 [2024-07-25 09:41:23.928823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.928846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.928983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.929006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.929143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.929170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.929317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.929344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.270 qpair failed and we were unable to recover it. 00:26:51.270 [2024-07-25 09:41:23.929478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.270 [2024-07-25 09:41:23.929516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.929589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.929612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.929778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.929805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.929955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.929982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.930119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.930155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.930298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.930337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 
00:26:51.271 [2024-07-25 09:41:23.930491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.930519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.930667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.930694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.930813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.930836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.930971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.930994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.931116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.931143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.931275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.931302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.931446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.931470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.931595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.931618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.931753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.931792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 00:26:51.271 [2024-07-25 09:41:23.931943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.271 [2024-07-25 09:41:23.931970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.271 qpair failed and we were unable to recover it. 
00:26:51.271 [2024-07-25 09:41:23.932086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.271 [2024-07-25 09:41:23.932109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.271 qpair failed and we were unable to recover it.
00:26:51.271 [the three messages above repeat back to back, roughly 210 times, with in-log timestamps running from 09:41:23.932086 through 09:41:23.966418 (console time 00:26:51.271-00:26:51.562); every repetition reports the same connect() failure with errno = 111 for tqpair=0xf26250, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." Only the first occurrence is shown here.]
00:26:51.562 [2024-07-25 09:41:23.966541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.966568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.966694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.966722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.966820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.966846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.966981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.967142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.967259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.967431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.967611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.967748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.967858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.967886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 
00:26:51.562 [2024-07-25 09:41:23.968014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.562 [2024-07-25 09:41:23.968052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.562 qpair failed and we were unable to recover it. 00:26:51.562 [2024-07-25 09:41:23.968216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.968257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.968413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.968438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.968561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.968586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.968702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.968725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.968906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.968934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.969097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.969125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.969274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.969301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.969423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.969448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.969605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.969628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 
00:26:51.563 [2024-07-25 09:41:23.969791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.969818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.969947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.969974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.970126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.970163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.970324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.970351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.970492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.970520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.970643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.970670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.970842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.970864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.970988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.971029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.971186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.971213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.971300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.971327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 
00:26:51.563 [2024-07-25 09:41:23.971497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.971521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.971687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.971714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.971840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.971867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.972018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.972046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.972160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.972183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.972306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.972329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.972503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.972531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.972680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.972707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.972880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.972902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.973026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.973065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 
00:26:51.563 [2024-07-25 09:41:23.973190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.973217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.973370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.973398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.563 qpair failed and we were unable to recover it. 00:26:51.563 [2024-07-25 09:41:23.973564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.563 [2024-07-25 09:41:23.973587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.973746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.973777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.973915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.973968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.974084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.974111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.974219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.974241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.974415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.974439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.974600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.974624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.974757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.974783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 
00:26:51.564 [2024-07-25 09:41:23.974917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.974940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.975050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.975073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.975250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.975277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.975437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.975462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.975606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.975649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.975755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.975795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.975922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.975949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.976081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.976108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.976214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.976237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.976374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.976413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 
00:26:51.564 [2024-07-25 09:41:23.976550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.976577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.976708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.976736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.976909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.976931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.977099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.977126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.977252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.977279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.977398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.977426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.977546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.977569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.977702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.977725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.977899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.977927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.978072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.978099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 
00:26:51.564 [2024-07-25 09:41:23.978220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.978243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.978376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.978400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.978533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.978557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.978739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.978766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.978913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.978935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.979063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.979086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.979224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.979252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.979380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.979408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.979509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.979533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.564 [2024-07-25 09:41:23.979626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.979649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 
00:26:51.564 [2024-07-25 09:41:23.979827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.564 [2024-07-25 09:41:23.979853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.564 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.979980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.980131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.980264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.980426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.980572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.980735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.980876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.980899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.981017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.981044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.981195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.981222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 
00:26:51.565 [2024-07-25 09:41:23.981372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.981395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.981535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.981573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.981733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.981782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.981909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.981937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.982094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.982131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.982225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.982248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.982384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.982412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.982536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.982564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.982737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.982760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.982928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.982955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 
00:26:51.565 [2024-07-25 09:41:23.983080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.983107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.983233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.983260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.983410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.983434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.983570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.983593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.983752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.983779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.983904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.983931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.984088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.984126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.984280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.984307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.984443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.984467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.984647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.984674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 
00:26:51.565 [2024-07-25 09:41:23.984800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.984838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.984971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.984998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.985161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.985188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.985336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.985370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.985496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.985535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.985643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.985666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.985781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.985808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.985969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.985996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.986112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.565 [2024-07-25 09:41:23.986135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.565 qpair failed and we were unable to recover it. 00:26:51.565 [2024-07-25 09:41:23.986241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.986263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 
00:26:51.566 [2024-07-25 09:41:23.986414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.986442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.986534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.986561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.986718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.986741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.986889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.986928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.987053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.987080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.987234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.987261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.987377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.987415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.987536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.987559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.987704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.987732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 00:26:51.566 [2024-07-25 09:41:23.987856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.566 [2024-07-25 09:41:23.987883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.566 qpair failed and we were unable to recover it. 
00:26:51.566 [2024-07-25 09:41:23.987995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.566 [2024-07-25 09:41:23.988018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.566 qpair failed and we were unable to recover it.
00:26:51.566 [... the same pair of errors -- posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 -- repeats for every reconnect attempt from 09:41:23.988174 through 09:41:24.021304, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:51.571 [2024-07-25 09:41:24.021280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.571 [2024-07-25 09:41:24.021304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.571 qpair failed and we were unable to recover it.
00:26:51.571 [2024-07-25 09:41:24.021414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.571 [2024-07-25 09:41:24.021439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.571 qpair failed and we were unable to recover it. 00:26:51.571 [2024-07-25 09:41:24.021561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.571 [2024-07-25 09:41:24.021585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.571 qpair failed and we were unable to recover it. 00:26:51.571 [2024-07-25 09:41:24.021732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.571 [2024-07-25 09:41:24.021757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.571 qpair failed and we were unable to recover it. 00:26:51.571 [2024-07-25 09:41:24.021851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.571 [2024-07-25 09:41:24.021875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.571 qpair failed and we were unable to recover it. 00:26:51.571 [2024-07-25 09:41:24.022002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.571 [2024-07-25 09:41:24.022027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.571 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.022139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.022164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.022316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.022340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.022460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.022484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.022608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.022632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.022753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.022778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 
00:26:51.572 [2024-07-25 09:41:24.022887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.022911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.023945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.023970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.024113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.024137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.024282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.024307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 
00:26:51.572 [2024-07-25 09:41:24.024461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.024486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.024605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.024629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.024749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.024773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.024939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.024963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.025107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.025240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.025365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.025502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.025645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.025787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 
00:26:51.572 [2024-07-25 09:41:24.025934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.025959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.026105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.026129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.026242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.026267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.026413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.026438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.026586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.026611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.026756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.026780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.026891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.026916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.027054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.027079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.027226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.027250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.572 [2024-07-25 09:41:24.027395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.027420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 
00:26:51.572 [2024-07-25 09:41:24.027531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.572 [2024-07-25 09:41:24.027556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.572 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.027670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.027695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.027824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.027848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.027962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.027986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.028143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.028168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.028307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.028332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.028461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.028485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.028602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.028627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.028744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.028768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.028886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.028911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 
00:26:51.573 [2024-07-25 09:41:24.029057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.029082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.029221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.029245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.029331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.029363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.029521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.029545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.029642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.029667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.029810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.029834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.029975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.030160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.030316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.030492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 
00:26:51.573 [2024-07-25 09:41:24.030636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.030806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.030947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.030972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.031161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.031279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.031444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.031555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.031744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.031879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.031999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 
00:26:51.573 [2024-07-25 09:41:24.032167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.032305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.032480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.032658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.032795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.032963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.032988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.573 [2024-07-25 09:41:24.033108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.573 [2024-07-25 09:41:24.033132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.573 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.033246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.033271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.033420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.033445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.033591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.033616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 
00:26:51.574 [2024-07-25 09:41:24.033733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.033757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.033879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.033903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.034921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.034945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.035095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.035120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 
00:26:51.574 [2024-07-25 09:41:24.035208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.035233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.035347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.035394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.035485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.035510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.035631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.035656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.035801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.035826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.035987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.036090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.036233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.036405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.036557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 
00:26:51.574 [2024-07-25 09:41:24.036709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.036894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.036919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.037064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.037103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.037239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.037263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.037389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.037414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.037558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.037583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.037724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.037748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.037867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.037891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.038016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.038185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 
00:26:51.574 [2024-07-25 09:41:24.038327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.038468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.038573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.038740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.038889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.574 [2024-07-25 09:41:24.038913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.574 qpair failed and we were unable to recover it. 00:26:51.574 [2024-07-25 09:41:24.039071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.039191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.039328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.039522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.039694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 
00:26:51.575 [2024-07-25 09:41:24.039865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.039964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.039989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.040109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.040149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.040271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.040295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.040410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.040436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.040556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.040581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.040700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.040742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.040895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.040919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.041035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.041177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 
00:26:51.575 [2024-07-25 09:41:24.041312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.041458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.041568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.041737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.041920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.041944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.042020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.042165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.042309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.042484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.042677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 
00:26:51.575 [2024-07-25 09:41:24.042779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.042951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.042976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.043124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.043149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.043290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.043314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.043445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.043470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.043573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.043598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.043695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.043720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.043869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.043893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.044022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.044045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.044176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.044215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 
00:26:51.575 [2024-07-25 09:41:24.044323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.044347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.044480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.044504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.575 [2024-07-25 09:41:24.044628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-25 09:41:24.044656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.575 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.044734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.044759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.044899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.044924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.045040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.045065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.045185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.045210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.045329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.045353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.045503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.045528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.045673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.045712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 
00:26:51.576 [2024-07-25 09:41:24.045853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.045877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.046910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.046935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.047071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.047095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.047247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.047272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 
00:26:51.576 [2024-07-25 09:41:24.047393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.047419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.047565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.047589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.047736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.047763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.047865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.047892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.048079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.048103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.048242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.048265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.048393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.048418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.048537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.048561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.048703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.048728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.048815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.048854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 
00:26:51.576 [2024-07-25 09:41:24.049015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.049163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.049329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.049483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.049635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.049778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.049918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.049943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.050108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.050133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.050254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.050278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.050423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.050448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 
00:26:51.576 [2024-07-25 09:41:24.050594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-25 09:41:24.050619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.576 qpair failed and we were unable to recover it. 00:26:51.576 [2024-07-25 09:41:24.050706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.050731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.050844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.050869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.051956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.051980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 
00:26:51.577 [2024-07-25 09:41:24.052145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.052168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.052304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.052328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.052494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.052519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.052642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.052665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.052775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.052798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.052914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.052938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.053052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.053076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.053158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.053181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.053353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.053384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.053527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.053551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 
00:26:51.577 [2024-07-25 09:41:24.053674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.053700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.053845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.053869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.577 qpair failed and we were unable to recover it. 00:26:51.577 [2024-07-25 09:41:24.053989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.577 [2024-07-25 09:41:24.054028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.054163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.054187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.054331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.054362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.054507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.054531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.054659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.054683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.054791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.054816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.054934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.054959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.055084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.055107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 
00:26:51.578 [2024-07-25 09:41:24.055266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.055295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.055440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.055466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.055622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.055647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.055773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.055813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.055936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.055963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.056113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.056141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.056283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.056308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.056470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.056495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.056634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.056659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.056747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.056772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 
00:26:51.578 [2024-07-25 09:41:24.056914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.056938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.057044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.057084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.057205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.057229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.057353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.057383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.057549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.057604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.057754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.057783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.057899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.057931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.058057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.058101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.578 qpair failed and we were unable to recover it. 00:26:51.578 [2024-07-25 09:41:24.058250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.578 [2024-07-25 09:41:24.058282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.058442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.058469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 
00:26:51.579 [2024-07-25 09:41:24.058595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.058622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.058769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.058809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.058973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.059146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.059291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.059443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.059594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.059764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.059932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.059956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.060083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.060107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 
00:26:51.579 [2024-07-25 09:41:24.060218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.060264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.060391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.060419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.060579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.060606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.060755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.060800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.060966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.060995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.061100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.061127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.061285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.061312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.061448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.061474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.061594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.061621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.061746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.061774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 
00:26:51.579 [2024-07-25 09:41:24.061922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.061950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.062106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.062134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.062247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.062275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.062402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.062434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.579 qpair failed and we were unable to recover it. 00:26:51.579 [2024-07-25 09:41:24.062566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.579 [2024-07-25 09:41:24.062609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.062716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.062746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.062920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.062948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.063101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.063128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.063248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.063281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.063441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.063468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 
00:26:51.580 [2024-07-25 09:41:24.063574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.063625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.063754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.063795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.063927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.063966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.064134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.064159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.064310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.064342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.064501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.064533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.064644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.064672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.064820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.064846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.064978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.065102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 
00:26:51.580 [2024-07-25 09:41:24.065284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.065494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.065631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.065776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.065946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.065991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.066132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.066159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.066314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.066340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.066453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.066479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.066605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.066646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 00:26:51.580 [2024-07-25 09:41:24.066790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.580 [2024-07-25 09:41:24.066816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.580 qpair failed and we were unable to recover it. 
00:26:51.580 [2024-07-25 09:41:24.066979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.067118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.067305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.067494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.067643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.067801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.067952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.067997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.068118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.068158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.068249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.068280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.068435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.068463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 
00:26:51.581 [2024-07-25 09:41:24.068609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.068651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.068816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.068848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.069007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.069033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.069156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.069185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.069329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.069361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.069522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.069566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.069737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.069785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.069900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.069943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.070035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.070067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.070225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.070252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 
00:26:51.581 [2024-07-25 09:41:24.070385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.070412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.070540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.070583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.070743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.070789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.070894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.070922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.071063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.071095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.071213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.581 [2024-07-25 09:41:24.071239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.581 qpair failed and we were unable to recover it. 00:26:51.581 [2024-07-25 09:41:24.071349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.071393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.071519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.071545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.071691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.071718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.071857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.071883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 
00:26:51.582 [2024-07-25 09:41:24.072034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf34230 is same with the state(5) to be set 00:26:51.582 [2024-07-25 09:41:24.072222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.072262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.072416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.072445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.072569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.072594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.072746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.072772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.072924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.072950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.073073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.073098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.073241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.073267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.073413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.073444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.073561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.073586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 
00:26:51.582 [2024-07-25 09:41:24.073701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.073730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.073877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.073904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.074058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.074087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.074235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.074263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.074369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.074396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.074537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.074563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.074667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.074710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.074840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.074888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.075034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.075076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.075198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.075224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 
00:26:51.582 [2024-07-25 09:41:24.075334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.075365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.075482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.075507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.582 qpair failed and we were unable to recover it. 00:26:51.582 [2024-07-25 09:41:24.075659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.582 [2024-07-25 09:41:24.075685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.075813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.075837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.075966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.075991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.076106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.076152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.076301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.076332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.076478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.076505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.076629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.076678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.076807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.076851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 
00:26:51.583 [2024-07-25 09:41:24.076975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.077112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.077291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.077441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.077587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.077740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.077909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.077933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.078086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.078131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.078249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.078276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.078408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.078437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 
00:26:51.583 [2024-07-25 09:41:24.078573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.078617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.078765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.078812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.078941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.078984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.079109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.079149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.079301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.079326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.079462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.079487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.079606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.079631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.079776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.583 [2024-07-25 09:41:24.079800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.583 qpair failed and we were unable to recover it. 00:26:51.583 [2024-07-25 09:41:24.079922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.079951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.080106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.080151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 
00:26:51.584 [2024-07-25 09:41:24.080272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.080298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.080425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.080452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.080609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.080637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.080781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.080825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.080938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.080981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.081080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.081106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.081250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.081274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.081397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.081421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.081551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.081576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.081716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.081741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 
00:26:51.584 [2024-07-25 09:41:24.081885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.081911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.082965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.584 [2024-07-25 09:41:24.082992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.584 qpair failed and we were unable to recover it. 00:26:51.584 [2024-07-25 09:41:24.083109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.083137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.083232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.083259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 
00:26:51.585 [2024-07-25 09:41:24.083443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.083471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.083573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.083598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.083759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.083803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.083928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.083971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.084132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.084175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.084303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.084328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.084470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.084500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.084605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.084634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.084765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.084793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.084912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.084940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 
00:26:51.585 [2024-07-25 09:41:24.085034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.085062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.085180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.085208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.085387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.085430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.085599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.085648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.085815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.085859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.085990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.086040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.086189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.086215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.086338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.086394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.086563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.086610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.086749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.086793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 
00:26:51.585 [2024-07-25 09:41:24.086913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.086942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.087089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.087122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.087272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.087298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.585 qpair failed and we were unable to recover it. 00:26:51.585 [2024-07-25 09:41:24.087418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.585 [2024-07-25 09:41:24.087463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.087574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.087602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.087774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.087818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.087976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.088004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.088168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.088194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.088344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.088381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.088497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.088542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 
00:26:51.586 [2024-07-25 09:41:24.088667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.088709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.088855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.088898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.089048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.089080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.089233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.089258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.089381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.089409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.089543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.089585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.089742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.089785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.089943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.089972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.090096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.090136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.090279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.090306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 
00:26:51.586 [2024-07-25 09:41:24.090447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.090492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.090614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.090642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.090771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.090811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.090950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.090976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.091089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.091120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.091251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.091278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.091427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.091455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.091549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.091575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.091674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.586 [2024-07-25 09:41:24.091700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.586 qpair failed and we were unable to recover it. 00:26:51.586 [2024-07-25 09:41:24.091815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.091841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 
00:26:51.587 [2024-07-25 09:41:24.091972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.091999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.092147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.092173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.092321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.092346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.092476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.092503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.092623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.092649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.092773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.092815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.092981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.093007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.093137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.093163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.093320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.093351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.093501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.093527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 
00:26:51.587 [2024-07-25 09:41:24.093693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.093734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.093861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.093888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.094964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.094992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.095175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.095220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 
00:26:51.587 [2024-07-25 09:41:24.095366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.095398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.095553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.095579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.095754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.095783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.095909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.095937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.096090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.587 [2024-07-25 09:41:24.096118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.587 qpair failed and we were unable to recover it. 00:26:51.587 [2024-07-25 09:41:24.096222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.588 [2024-07-25 09:41:24.096247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.588 qpair failed and we were unable to recover it. 00:26:51.588 [2024-07-25 09:41:24.096393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.588 [2024-07-25 09:41:24.096419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.588 qpair failed and we were unable to recover it. 00:26:51.588 [2024-07-25 09:41:24.096514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.588 [2024-07-25 09:41:24.096539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.588 qpair failed and we were unable to recover it. 00:26:51.588 [2024-07-25 09:41:24.096639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.588 [2024-07-25 09:41:24.096664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.588 qpair failed and we were unable to recover it. 00:26:51.588 [2024-07-25 09:41:24.096808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.588 [2024-07-25 09:41:24.096833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.588 qpair failed and we were unable to recover it. 
00:26:51.588 [2024-07-25 09:41:24.096920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.096945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.097135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.097166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.097335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.097368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.097477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.097503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.097660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.097708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.097886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.097929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.098062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.098097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.098264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.098291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.098432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.098459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
00:26:51.588 [2024-07-25 09:41:24.098604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.588 [2024-07-25 09:41:24.098628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.588 qpair failed and we were unable to recover it.
[The same two-line failure repeats for the remainder of this burst: posix.c:1023:posix_sock_create reports "connect() failed, errno = 111" and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fece0000b90 or tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." Timestamps run from 09:41:24.098792 through 09:41:24.132410.]
00:26:51.596 [2024-07-25 09:41:24.132559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.132587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.132704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.132728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.132871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.132895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.133040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.133082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.133215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.133239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.133429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.133458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.133585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.133614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.133747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.133771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.133942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.133989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 00:26:51.596 [2024-07-25 09:41:24.134126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.596 [2024-07-25 09:41:24.134154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.596 qpair failed and we were unable to recover it. 
00:26:51.596 [2024-07-25 09:41:24.134274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.134316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.134455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.134480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.134591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.134616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.134803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.134826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.134985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.135013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.135162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.135190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.135318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.135342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.135509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.135551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.135700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.135728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.135860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.135898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 
00:26:51.597 [2024-07-25 09:41:24.136074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.136102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.136201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.136229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.136401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.136426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.136597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.136626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.136749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.136777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.136945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.136983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.137136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.137164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.137318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.137346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.137460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.137485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.137619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.137643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 
00:26:51.597 [2024-07-25 09:41:24.137827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.137855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.137990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.138014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.138121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.138144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.138282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.138322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.138431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.138457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.597 [2024-07-25 09:41:24.138579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.597 [2024-07-25 09:41:24.138602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.597 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.138745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.138773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.138894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.138933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.139075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.139100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.139247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.139275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 
00:26:51.598 [2024-07-25 09:41:24.139387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.139411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.139570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.139593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.139763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.139792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.139947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.139970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.140147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.140175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.140299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.140327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.140481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.140506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.140653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.140676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.140811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.140843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.140997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.141035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 
00:26:51.598 [2024-07-25 09:41:24.141164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.141206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.141363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.141405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.141547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.141572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.141691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.141733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.141828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.141856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.598 qpair failed and we were unable to recover it. 00:26:51.598 [2024-07-25 09:41:24.141977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.598 [2024-07-25 09:41:24.142001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.142160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.142199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.142290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.142319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.142460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.142485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.142591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.142615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 
00:26:51.599 [2024-07-25 09:41:24.142776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.142804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.142960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.142984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.143126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.143167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.143293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.143321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.143469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.143494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.143611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.143651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.143807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.143835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.143961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.143984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.144115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.144138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.144268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.144296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 
00:26:51.599 [2024-07-25 09:41:24.144455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.144481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.144625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.144668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.144768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.144796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.144963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.144987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.145145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.145173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.145309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.145342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.145454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.145478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.145621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.145659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.145817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.145846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.599 [2024-07-25 09:41:24.146023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.146046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 
00:26:51.599 [2024-07-25 09:41:24.146203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.599 [2024-07-25 09:41:24.146231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.599 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.146382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.146423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.146582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.146605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.146743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.146771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.146920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.146948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.147078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.147116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.147233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.147257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.147360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.147389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.147548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.147573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.147705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.147743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 
00:26:51.600 [2024-07-25 09:41:24.147907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.147935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.148088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.148126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.148255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.148296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.148414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.148443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.148545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.148569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.148692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.148715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.148862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.148889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.149056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.149079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.149203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.149245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.149348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.149384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 
00:26:51.600 [2024-07-25 09:41:24.149501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.149525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.149653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.149676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.149867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.149895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.150046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.150083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.150242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.150271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.150387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.150416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.600 qpair failed and we were unable to recover it. 00:26:51.600 [2024-07-25 09:41:24.150520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.600 [2024-07-25 09:41:24.150545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.150692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.150716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.150816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.150843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.150967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.150991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 
00:26:51.601 [2024-07-25 09:41:24.151157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.151198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.151316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.151344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.151464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.151488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.151574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.151599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.151738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.151766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.151865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.151892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.152062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.152086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.152250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.152278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.152401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.152441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.152541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.152564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 
00:26:51.601 [2024-07-25 09:41:24.152735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.152764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.152922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.152945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.153078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.153119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.153248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.153277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.153440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.153478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.153586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.153627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.153755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.153784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.153897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.153920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.154075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.154099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.154267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.154296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 
00:26:51.601 [2024-07-25 09:41:24.154424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.154464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.154592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.154616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.154762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.154790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.154885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.601 [2024-07-25 09:41:24.154909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.601 qpair failed and we were unable to recover it. 00:26:51.601 [2024-07-25 09:41:24.155077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.155101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.155242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.155270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.155428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.155454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.155610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.155651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.155804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.155832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.155963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.156001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 
00:26:51.602 [2024-07-25 09:41:24.156174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.156202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.156294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.156322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.156501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.156526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.156661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.156684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.156800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.156828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.156987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.157011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.157176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.157204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.157340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.157378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.157517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.157541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.157659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.157683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 
00:26:51.602 [2024-07-25 09:41:24.157800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.157828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.157984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.158008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.158185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.158213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.158370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.158399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.158555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.158579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.158708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.158752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.158890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.158918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.159022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.159046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.159187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.159211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.159334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.159380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 
00:26:51.602 [2024-07-25 09:41:24.159521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.159546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.602 [2024-07-25 09:41:24.159663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.602 [2024-07-25 09:41:24.159686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.602 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.159846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.159874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.160005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.160044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.160217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.160246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.160388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.160418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.160552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.160576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.160708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.160731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.160912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.160941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.161068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.161091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 
00:26:51.603 [2024-07-25 09:41:24.161233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.161257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.161379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.161407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.161546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.161571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.161716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.161756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.161886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.161914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.162033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.162072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.162219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.162260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.162366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.162409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.162535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.162560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.162687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.162712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 
00:26:51.603 [2024-07-25 09:41:24.162850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.162878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.163034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.163237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.163410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.163547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.163689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.163838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.163996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.164020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.164145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.164184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 00:26:51.603 [2024-07-25 09:41:24.164334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.603 [2024-07-25 09:41:24.164372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.603 qpair failed and we were unable to recover it. 
00:26:51.604 [2024-07-25 09:41:24.164483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.164507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.164660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.164684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.164817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.164846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.164955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.164979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.165138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.165162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.165331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.165373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.165480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.165504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.165597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.165621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.165788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.165812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.165944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.165967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 
00:26:51.604 [2024-07-25 09:41:24.166093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.166116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.166300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.166328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.166487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.166511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.166616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.166654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.166796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.166824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.166980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.167018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.167182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.167210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.167304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.167333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.167469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.167494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.167617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.167658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 
00:26:51.604 [2024-07-25 09:41:24.167819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.167847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.167994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.168032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.168157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.168197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.604 [2024-07-25 09:41:24.168346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.604 [2024-07-25 09:41:24.168384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.604 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.168484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.168508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.168639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.168662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.168776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.168800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.168947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.168971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.169100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.169124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.169287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.169315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 
00:26:51.605 [2024-07-25 09:41:24.169485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.169509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.169634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.169674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.169777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.169805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.169959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.169983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.170109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.170133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.170251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.170279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.170406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.170432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.170554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.170578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.170749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.170777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.170938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.170960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 
00:26:51.605 [2024-07-25 09:41:24.171131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.171159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.171291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.171319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.171444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.171469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.171564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.171587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.171728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.171756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.171864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.171891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.172041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.172064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.172200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.172228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.172385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.172423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.172528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.172552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 
00:26:51.605 [2024-07-25 09:41:24.172681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.172709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.172844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.172867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.173006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.173046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.605 [2024-07-25 09:41:24.173170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.605 [2024-07-25 09:41:24.173198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.605 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.173329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.173376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.173555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.173579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.173703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.173730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.173888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.173926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.174035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.174074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.174201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.174230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 
00:26:51.606 [2024-07-25 09:41:24.174345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.174392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.174525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.174549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.174669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.174697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.174853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.174891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.175046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.175074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.175200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.175228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.175398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.175423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.175519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.175543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.175659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.175682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.175819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.175856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 
00:26:51.606 [2024-07-25 09:41:24.175986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.176010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.176154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.176182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.176289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.176313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.176486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.176511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.176647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.176672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.176819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.176857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.177009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.177053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.177176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.177204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.177328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.177351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.177507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.177532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 
00:26:51.606 [2024-07-25 09:41:24.177709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.177737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.177852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.177876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.177997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.178021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.178132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.606 [2024-07-25 09:41:24.178160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.606 qpair failed and we were unable to recover it. 00:26:51.606 [2024-07-25 09:41:24.178268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.178307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.178437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.178465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.178587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.178615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.178748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.178789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.178935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.178977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.179095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.179123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 
00:26:51.607 [2024-07-25 09:41:24.179252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.179294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.179432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.179457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.179560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.179585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.179678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.179702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.179871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.179909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.180044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.180073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.180227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.180251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.180407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.180432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.180577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.180605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.180737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.180776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 
00:26:51.607 [2024-07-25 09:41:24.180915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.180939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.181045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.181073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.181207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.181231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.181351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.181381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.181505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.181547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.181672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.181711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.181847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.181871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.182010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.182039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.182160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.182184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.182306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.182330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 
00:26:51.607 [2024-07-25 09:41:24.182469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.182497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.182637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.182676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.182816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.182858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.183010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.607 [2024-07-25 09:41:24.183038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.607 qpair failed and we were unable to recover it. 00:26:51.607 [2024-07-25 09:41:24.183169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.183207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.183329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.183353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.183491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.183520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.183626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.183650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.183784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.183807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.183950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.183990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 
00:26:51.608 [2024-07-25 09:41:24.184141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.184165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.184340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.184417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.184565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.184590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.184725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.184762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.184871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.184911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.185066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.185208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.185343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.185478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.185605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 
00:26:51.608 [2024-07-25 09:41:24.185766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.185940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.185968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.186135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.186286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.186430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.186558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.186677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.186873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.186983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.187007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.187124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.187148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 
00:26:51.608 [2024-07-25 09:41:24.187292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.187320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.187438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.187464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.187609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.187648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.187780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.608 [2024-07-25 09:41:24.187808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.608 qpair failed and we were unable to recover it. 00:26:51.608 [2024-07-25 09:41:24.187927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.187951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.188073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.188097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.188250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.188294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.188405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.188431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.188511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.188536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.188657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.188685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 
00:26:51.609 [2024-07-25 09:41:24.188848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.188871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.189955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.189979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.190150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.190178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.190283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.190306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 
00:26:51.609 [2024-07-25 09:41:24.190465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.190490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.190601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.190644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.190782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.190820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.190993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.191021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.191145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.191173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.191302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.191331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.191479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.191519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.191643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.191671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.191843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.191865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.191992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.192033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 
00:26:51.609 [2024-07-25 09:41:24.192133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.192160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.192290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.192314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.192448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.192473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.192613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.192642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.192770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.609 [2024-07-25 09:41:24.192808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.609 qpair failed and we were unable to recover it. 00:26:51.609 [2024-07-25 09:41:24.192905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.192928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.193064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.193105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.193247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.193275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.193414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.193439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.193558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.193583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 
00:26:51.610 [2024-07-25 09:41:24.193675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.193714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.193847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.193888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.194084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.194252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.194382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.194528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.194687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.194836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.194972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.195124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 
00:26:51.610 [2024-07-25 09:41:24.195293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.195416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.195549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.195717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.195915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.195945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.196108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.196136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.196237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.196262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.196375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.196418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.196545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.196569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.196704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.196728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 
00:26:51.610 [2024-07-25 09:41:24.196897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.196926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.610 qpair failed and we were unable to recover it. 00:26:51.610 [2024-07-25 09:41:24.197048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.610 [2024-07-25 09:41:24.197076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.197206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.197248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.197380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.197422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.197547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.197572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.197714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.197752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.197859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.197899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.198028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.198183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.198311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 
00:26:51.611 [2024-07-25 09:41:24.198453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.198564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.198724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.198859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.198883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.199018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.199058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.199154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.199182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.199300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.199344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.199480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.199519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.199628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.199655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.199795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.199845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 
00:26:51.611 [2024-07-25 09:41:24.199985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.200176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.200339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.200463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.200582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.200769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.200923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.200972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.201101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.201129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.201254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.201282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.201390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.201432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 
00:26:51.611 [2024-07-25 09:41:24.201530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.201555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.201680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.201704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.201822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.201862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.201992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.202112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.202259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.202416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.202557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.202690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.202877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.202906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 
00:26:51.611 [2024-07-25 09:41:24.203010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.203175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.203308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.203481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.203630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.203778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.203926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.203971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.204095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.611 [2024-07-25 09:41:24.204123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.611 qpair failed and we were unable to recover it. 00:26:51.611 [2024-07-25 09:41:24.204271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.204298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.204443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.204482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 
00:26:51.612 [2024-07-25 09:41:24.204583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.204610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.204707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.204736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.204911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.204954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.205095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.205123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.205242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.205266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.205393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.205419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.205536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.205560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.205723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.205751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.205862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.205886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.206034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.206062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 
00:26:51.612 [2024-07-25 09:41:24.206201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.206229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.206400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.206425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.206518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.206542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.206665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.206693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.206861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.206889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.207012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.207166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.207301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.207456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.207601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 
00:26:51.612 [2024-07-25 09:41:24.207760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.207948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.207975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.208066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.208093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.208222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.208253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.208422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.208460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.208564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.208590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.208684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.208709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.208853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.208877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.209013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.209143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 
00:26:51.612 [2024-07-25 09:41:24.209301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.209465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.209578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.209721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.209875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.209913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.210060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.210088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.210218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.210247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.210401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.210426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.210532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.210556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.210688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.210729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 
00:26:51.612 [2024-07-25 09:41:24.210853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.210882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.211956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.211984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.612 qpair failed and we were unable to recover it. 00:26:51.612 [2024-07-25 09:41:24.212115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.612 [2024-07-25 09:41:24.212144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.212270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.212298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 
00:26:51.613 [2024-07-25 09:41:24.212406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.212436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.212522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.212548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.212657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.212686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.212831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.212860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.212990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.213169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.213375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.213514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.213670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.213826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 
00:26:51.613 [2024-07-25 09:41:24.213973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.213996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.214104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.214133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.214262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.214304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.214423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.214449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.214550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.214575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.214688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.214711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.214892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.214921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.215039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.215067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.215228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.215256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.215390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.215432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 
00:26:51.613 [2024-07-25 09:41:24.215516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.215540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.215693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.215717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.215843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.215884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.215999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.216187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.216314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.216478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.216605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.216757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.216912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.216941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 
00:26:51.613 [2024-07-25 09:41:24.217104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.217233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.217366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.217497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.217651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.217778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.217940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.217981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.218106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.218134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.218261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.218289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.218407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.218432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 
00:26:51.613 [2024-07-25 09:41:24.218524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.218552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.218654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.218682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.218859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.218882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.219018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.219059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.219224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.219252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.219408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.219433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.219530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.219555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.219718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.219746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.613 [2024-07-25 09:41:24.219906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.613 [2024-07-25 09:41:24.219929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.613 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.220033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 
00:26:51.614 [2024-07-25 09:41:24.220199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.220348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.220480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.220608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.220776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.220955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.220994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.221123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.221151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.221259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.221284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.221421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.221460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.221584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.221613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 
00:26:51.614 [2024-07-25 09:41:24.221747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.221771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.221906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.221930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.222070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.222098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.222246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.222274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.222434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.222459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.222558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.222582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.222691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.222715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.222850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.222891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.223046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.223183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 
00:26:51.614 [2024-07-25 09:41:24.223313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.223450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.223571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.223751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.223887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.223915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.224075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.224099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.224201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.224225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.224371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.224400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.224503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.224528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 00:26:51.614 [2024-07-25 09:41:24.224653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.614 [2024-07-25 09:41:24.224677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.614 qpair failed and we were unable to recover it. 
00:26:51.614 [2024-07-25 09:41:24.224809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.224841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.225879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.225917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.226043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.226181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-07-25 09:41:24.226335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.226485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.226617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.226766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.226922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.226946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.227051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.227079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.227205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.227247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.227374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.227417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.227521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.227545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.227663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.227687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-07-25 09:41:24.227827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.227869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.228963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.228997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.229127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.229152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.229291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.229314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-07-25 09:41:24.229448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.229472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.229569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.229593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.229735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.229758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.229890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.229917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.230017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.230041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.230212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.230266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.230417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.230448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.230560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.230585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.230712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.230736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.230854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.230883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-07-25 09:41:24.231034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.231169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.231316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.231484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.231654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.231818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.231951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.615 [2024-07-25 09:41:24.231975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-07-25 09:41:24.232093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.232239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.232371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-07-25 09:41:24.232501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.232667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.232819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.232960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.232984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.233122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.233162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.233277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.233316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.233439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.233465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.233588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.233613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.233754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.233793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.233898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.233922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-07-25 09:41:24.234093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.234121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.234236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.234276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.234395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.234421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.234532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.234560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.234716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.234754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.234850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.234873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.235021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.235211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.235331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.235486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-07-25 09:41:24.235637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.235817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.235967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.235996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.236125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.236149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.236277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.236301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.236445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.236470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.236567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.236591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.236714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.236738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.236840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.236868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.237024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-07-25 09:41:24.237182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.237317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.237464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.237583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.237741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.237896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.237920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.238093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.238136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.238241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.238269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.238377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.238402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.238502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.238526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-07-25 09:41:24.238661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.238689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.238855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.238878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.239015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.239057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-07-25 09:41:24.239174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.616 [2024-07-25 09:41:24.239202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.239329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.239373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.239487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.239513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.239654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.239682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.239796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.239819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.239921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.239945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.240118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.240147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 
00:26:51.617 [2024-07-25 09:41:24.240319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.240347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.240459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.240484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.240569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.240594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.240738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.240762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.240946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.240999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.241163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.241192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.241284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.241309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.241453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.241497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.241614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.241643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.241750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.241775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 
00:26:51.617 [2024-07-25 09:41:24.241911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.241935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.242079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.242119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.242245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.242269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.242396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.242435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.242542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.242569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.242705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.242729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.242886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.242923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.243056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.243204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.243334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 
00:26:51.617 [2024-07-25 09:41:24.243473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.243658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.243800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.243946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.243974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.244099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.244123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.244250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.244287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.244451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.244478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.244575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.244599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.244737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.244761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.244871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.244899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 
00:26:51.617 [2024-07-25 09:41:24.245024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.245159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.245368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.245510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.245670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.245816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.245971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.245996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.246112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.617 [2024-07-25 09:41:24.246137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-07-25 09:41:24.246273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.246313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.246451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.246476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 
00:26:51.618 [2024-07-25 09:41:24.246557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.246582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.246738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.246765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.246889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.246928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.247927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.247966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 
00:26:51.618 [2024-07-25 09:41:24.248118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.248145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.248275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.248321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.248438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.248463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.248541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.248565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.248694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.248717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.248905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.248933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.249089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.249117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.249285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.249308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.249410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.249435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.249550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.249577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 
00:26:51.618 [2024-07-25 09:41:24.249706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.249730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.249863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.249886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.250060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.250211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.250425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.250585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.250725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.250879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.250996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.251184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 
00:26:51.618 [2024-07-25 09:41:24.251310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.251458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.251603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.251727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.251927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.251955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.252078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.252118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.618 qpair failed and we were unable to recover it. 00:26:51.618 [2024-07-25 09:41:24.252202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.618 [2024-07-25 09:41:24.252226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.252333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.252373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.252490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.252514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.252609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.252633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 
00:26:51.619 [2024-07-25 09:41:24.252803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.252831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.252946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.252969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.253945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.253973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.254093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.254121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 
00:26:51.619 [2024-07-25 09:41:24.254290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.254318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.254448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.254473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.254566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.254590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.254752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.254789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.254923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.254951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.255080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.255108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.255240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.255264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.255389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.255414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.255528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.255556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.255727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.255764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 
00:26:51.619 [2024-07-25 09:41:24.255869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.255909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.256041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.256070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.256227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.256267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.256415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.256457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.256568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.256596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.256706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.256730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.256848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.256871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.257037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.257065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.257231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.257254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.257349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.257396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 
00:26:51.619 [2024-07-25 09:41:24.257501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.257529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.257636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.257665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.257813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.257841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.257971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.258121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.258300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.258444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.258558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.258699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 00:26:51.619 [2024-07-25 09:41:24.258859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.619 [2024-07-25 09:41:24.258885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.619 qpair failed and we were unable to recover it. 
00:26:51.620 [2024-07-25 09:41:24.259017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.259135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.259252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.259395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.259521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.259695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.259892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.259917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.260021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.260189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.260326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 
00:26:51.620 [2024-07-25 09:41:24.260444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.260608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.260765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.260925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.260950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.261044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.261072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.261226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.261253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.261421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.261451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.261571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.261600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.261737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.261763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.261852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.261877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 
00:26:51.620 [2024-07-25 09:41:24.261990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.262166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.262348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.262496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.262636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.262804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.262940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.262965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.263113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.263253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.263371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 
00:26:51.620 [2024-07-25 09:41:24.263519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.263629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.263745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.263915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.263940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.264087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.264112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.264235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.264260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.620 [2024-07-25 09:41:24.264352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.620 [2024-07-25 09:41:24.264386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.620 qpair failed and we were unable to recover it. 00:26:51.913 [2024-07-25 09:41:24.264479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.913 [2024-07-25 09:41:24.264505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.913 qpair failed and we were unable to recover it. 00:26:51.913 [2024-07-25 09:41:24.264649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.264674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.264822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.264856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 
00:26:51.914 [2024-07-25 09:41:24.264951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.264981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.265118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.265154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.265323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.265350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.265465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.265491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.265609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.265638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.265788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.265814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.265948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.265973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.266100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.266247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.266405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 
00:26:51.914 [2024-07-25 09:41:24.266526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.266697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.266816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.266960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.266985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.267128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.267277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.267454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.267579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.267717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.267862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 
00:26:51.914 [2024-07-25 09:41:24.267974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.267999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.268943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.268968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.269110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.269254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 
00:26:51.914 [2024-07-25 09:41:24.269403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.269547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.269689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.269811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.269958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.269983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.914 [2024-07-25 09:41:24.270155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.914 [2024-07-25 09:41:24.270195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.914 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.270306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.270334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.270473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.270501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.270633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.270659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.270813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.270858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 
00:26:51.915 [2024-07-25 09:41:24.270984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.271026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.271180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.271207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.271354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.271386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.271507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.271532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.271680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.271706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.271848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.271873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.272017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.272149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.272311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.272470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 
00:26:51.915 [2024-07-25 09:41:24.272593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.272746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.272896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.272922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.273107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.273262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.273413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.273542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.273695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.273839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.273974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 
00:26:51.915 [2024-07-25 09:41:24.274141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.274262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.274427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.274549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.274736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.274896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.274924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.275077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.275105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.275229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.275257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.275371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.275397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.275497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.275522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 
00:26:51.915 [2024-07-25 09:41:24.275648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.275674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.275795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.915 [2024-07-25 09:41:24.275820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.915 qpair failed and we were unable to recover it. 00:26:51.915 [2024-07-25 09:41:24.275968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.275993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.276137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.276162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.276294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.276322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.276482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.276509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.276597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.276622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.276770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.276796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.276915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.276940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.277083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.277108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 
00:26:51.916 [2024-07-25 09:41:24.277225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.277253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.277363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.277391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.277511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.277536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.277654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.277679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.277833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.277859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.278004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.278127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.278308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.278465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.278614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 
00:26:51.916 [2024-07-25 09:41:24.278765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.278909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.278934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.279075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.279103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.279232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.279260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.279408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.279434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.279527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.279553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.279672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.279697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.279853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.279878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.280025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.280083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.280205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.280251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 
00:26:51.916 [2024-07-25 09:41:24.280367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.280395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.280489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.280515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.280663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.280707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.280839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.280866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.281022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.281049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.281174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.281199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.281292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.281317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.281426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.281451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.281570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.281595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-07-25 09:41:24.281751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-07-25 09:41:24.281776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.916 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-07-25 09:41:24.281894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.281922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.282014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.282042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.282169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.282197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.282332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.282364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.282491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.282516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.282660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.282700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.282837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.282867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.283019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.283059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.283219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.283248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.283350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.283383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-07-25 09:41:24.283487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.283513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.283662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.283687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.283835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.283860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.283981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.284101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.284257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.284377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.284517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.284664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.284818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-07-25 09:41:24.284962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.284987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.285152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.285180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.285311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.285338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.285485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.285510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.285650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.285675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.285811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.285836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.285962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.285987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.286094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.286121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.286243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.286272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.286422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.286448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-07-25 09:41:24.286554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.286580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.286698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.286723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.286851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.286891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.287031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.287060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.287203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.287231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.287335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.917 [2024-07-25 09:41:24.287372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-07-25 09:41:24.287496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.287521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.287625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.287650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.287768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.287792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.287936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.287963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:51.918 [2024-07-25 09:41:24.288052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.288081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.288181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.288208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.288377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.288424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.288523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.288549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.288709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.288754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.288906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.288945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.289125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.289154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.289261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.289287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.289428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.289455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.289552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.289577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:51.918 [2024-07-25 09:41:24.289705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.289730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.289879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.289907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.290919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.290947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.291075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:51.918 [2024-07-25 09:41:24.291195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.291323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.291495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.291628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.291795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.291941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.918 [2024-07-25 09:41:24.291966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-07-25 09:41:24.292111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.292139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.292290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.292318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.292441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.292467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.292564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.292590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 
00:26:51.919 [2024-07-25 09:41:24.292709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.292735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.292881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.292920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.293051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.293202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.293402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.293519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.293642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.293825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.293990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.294166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 
00:26:51.919 [2024-07-25 09:41:24.294320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.294492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.294617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.294763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.294925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.294953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.295081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.295114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.295264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.295292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.295429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.295466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.295597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.295631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.295799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.295832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 
00:26:51.919 [2024-07-25 09:41:24.295962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.295994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.296130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.296163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.296311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.296343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.296452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.296479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.296603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.296629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.296784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.296812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.296965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.296993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.297090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.297117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.297239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.297267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.297397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.297424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 
00:26:51.919 [2024-07-25 09:41:24.297510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.297535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.297646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.297671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.297805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.297830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.919 [2024-07-25 09:41:24.297991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-25 09:41:24.298016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.919 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.298163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.298187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.298343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.298378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.298474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.298500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.298610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.298635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.298778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.298803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.298950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.298975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 
00:26:51.920 [2024-07-25 09:41:24.299086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.299111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.299241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.299287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.299424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.299465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.299566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.299592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.299679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.299705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.299822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.299849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.299981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.300180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.300313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.300472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 
00:26:51.920 [2024-07-25 09:41:24.300578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.300684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.300850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.300953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.300978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.301113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.301140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.301299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.301332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.301453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.301480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.301572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.301597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.301740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.301764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.301910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.301935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 
00:26:51.920 [2024-07-25 09:41:24.302098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.302145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.302265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.302292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.302441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.302468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.302565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.302590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.302682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.302708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.302794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.302819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.302981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.303008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.303137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.303165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.303291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.303320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.303467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.303496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 
00:26:51.920 [2024-07-25 09:41:24.303599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.303625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.920 qpair failed and we were unable to recover it. 00:26:51.920 [2024-07-25 09:41:24.303808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-25 09:41:24.303850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.304940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.304968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.305096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.305124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 
00:26:51.921 [2024-07-25 09:41:24.305267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.305299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.305412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.305438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.305592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.305635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.305770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.305800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.305936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.305996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.306164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.306213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.306339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.306374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.306505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.306530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.306690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.306715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.306833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.306861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 
00:26:51.921 [2024-07-25 09:41:24.307024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.307070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.307190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.307217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.307368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.307394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.307492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.307518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.307636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.307660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.307806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.307830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.307975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.308123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.308251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.308376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 
00:26:51.921 [2024-07-25 09:41:24.308545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.308690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.308860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.308885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.309013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.309038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.309173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.309201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.309350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.309403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.309520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.309545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.309659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.309684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.309841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-25 09:41:24.309867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.921 qpair failed and we were unable to recover it. 00:26:51.921 [2024-07-25 09:41:24.310010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.310034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 
00:26:51.922 [2024-07-25 09:41:24.310193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.310221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.310403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.310430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.310556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.310581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.310745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.310769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.310886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.310912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.311056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.311081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.311214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.311241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.311415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.311454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.311585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.311611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.311757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.311782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 
00:26:51.922 [2024-07-25 09:41:24.311936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.311961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.312084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.312108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.312246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.312279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.312433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.312459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.312585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.312610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.312727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.312752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.312901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.312926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.313011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.313195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.313333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 
00:26:51.922 [2024-07-25 09:41:24.313492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.313634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.313803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.313947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.313972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.314100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.314128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.314267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.314310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.314478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.314506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.314665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.314690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.314791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.314816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.314932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.314957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 
00:26:51.922 [2024-07-25 09:41:24.315092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.315120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.315280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.315307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.315448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.315474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.315594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.315618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.315760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.315785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.315890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.315915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.316035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.316062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.922 qpair failed and we were unable to recover it. 00:26:51.922 [2024-07-25 09:41:24.316184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.922 [2024-07-25 09:41:24.316227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.316334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.316378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.316550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.316582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 
00:26:51.923 [2024-07-25 09:41:24.316707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.316733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.316855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.316881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.317002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.317029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.317181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.317209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.317366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.317433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.317545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.317572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.317701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.317727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.317885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.317928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.318074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.318119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.318276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.318304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 
00:26:51.923 [2024-07-25 09:41:24.318470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.318509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.318605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.318632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.318764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.318792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.318914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.318942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.319064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.319111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.319236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.319263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.319444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.319483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.319604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.319631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.319727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.319753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.319885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.319911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 
00:26:51.923 [2024-07-25 09:41:24.320057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.320191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.320371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.320506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.320651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.320798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.320952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.320984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.321141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.321171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.923 [2024-07-25 09:41:24.321285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.923 [2024-07-25 09:41:24.321313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.923 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.321441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.321468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 
00:26:51.924 [2024-07-25 09:41:24.321569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.321595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.321782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.321812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.321928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.321956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.322112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.322175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.322320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.322373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.322558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.322585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.322723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.322751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.322902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.322930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.323064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.323126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.323313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.323341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 
00:26:51.924 [2024-07-25 09:41:24.323495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.323534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.323693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.323718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.323879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.323904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.324063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.324191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.324373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.324560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.324738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.324862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.324974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 
00:26:51.924 [2024-07-25 09:41:24.325096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.325250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.325425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.325588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.325769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.325945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.325987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.326160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.326203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.326327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.326353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.326463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.326489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.326614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.326653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 
00:26:51.924 [2024-07-25 09:41:24.326791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.326818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.326950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.326978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.327080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.327108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.327260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.327287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.327427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.327452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.327571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.327596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.327705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.924 [2024-07-25 09:41:24.327729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.924 qpair failed and we were unable to recover it. 00:26:51.924 [2024-07-25 09:41:24.327902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.327930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.328055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.328083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.328203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.328230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 
00:26:51.925 [2024-07-25 09:41:24.328345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.328380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.328540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.328565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.328709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.328734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.328886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.328913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.329041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.329224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.329398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.329512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.329649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.329796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 
00:26:51.925 [2024-07-25 09:41:24.329950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.329979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.330102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.330127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.330255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.330283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.330427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.330452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.330540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.330565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.330713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.330737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.330895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.330919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.331076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.331101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.331225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.331252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.331377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.331418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 
00:26:51.925 [2024-07-25 09:41:24.331560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.331585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.331718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.331742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.331868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.331892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.332947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.332972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 
00:26:51.925 [2024-07-25 09:41:24.333133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.333160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.333255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.333283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.333403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.333428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.333572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.333597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.333743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.925 [2024-07-25 09:41:24.333768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.925 qpair failed and we were unable to recover it. 00:26:51.925 [2024-07-25 09:41:24.333845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.333869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.334005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.334142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.334301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.334438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 
00:26:51.926 [2024-07-25 09:41:24.334573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.334718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.334896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.334923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.335083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.335110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.335238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.335265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.335372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.335397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.335542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.335567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.335712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.335736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.335852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.335876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.336033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.336056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 
00:26:51.926 [2024-07-25 09:41:24.336212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.336237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.336338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.336388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.336547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.336575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.336728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.336767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.336910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.336937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.337088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.337114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.337219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.337247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.337370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.337424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.337573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.337599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.337749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.337773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 
00:26:51.926 [2024-07-25 09:41:24.337893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.337918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.338014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.338039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.338202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.338229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.338416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.338455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.338613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.338656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.338779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.338805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.338952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.338978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.339076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.339101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.339236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.339264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.339371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.339415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 
00:26:51.926 [2024-07-25 09:41:24.339540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.339565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.339684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.339708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.339839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.926 [2024-07-25 09:41:24.339862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.926 qpair failed and we were unable to recover it. 00:26:51.926 [2024-07-25 09:41:24.340018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.340159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.340318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.340500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.340645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.340785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.340934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.340959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 
00:26:51.927 [2024-07-25 09:41:24.341090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.341118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.341233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.341261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.341422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.341448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.341567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.341593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.341735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.341761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.341872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.341897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.342026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.342054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.342169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.342212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.342342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.342411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.342512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.342538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 
00:26:51.927 [2024-07-25 09:41:24.342655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.342680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.342827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.342857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.343018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.343046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.343145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.343172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.343297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.343325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.343520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.343560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.343699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.343725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.343873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.343898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.344041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.344067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.344188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.344216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 
00:26:51.927 [2024-07-25 09:41:24.344372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.344414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.344561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.344587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.344698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.344724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.344840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.344865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.344983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.345008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.927 qpair failed and we were unable to recover it. 00:26:51.927 [2024-07-25 09:41:24.345146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.927 [2024-07-25 09:41:24.345174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.345299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.345327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.345471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.345497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.345586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.345611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.345736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.345760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 
00:26:51.928 [2024-07-25 09:41:24.345884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.345909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.346066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.346094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.346246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.346274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.346434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.346460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.346607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.346632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.346747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.346772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.346893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.346919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.347080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.347108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.347231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.347263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.347423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.347450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 
00:26:51.928 [2024-07-25 09:41:24.347571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.347596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.347774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.347799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.347948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.347974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.348120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.348149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.348270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.348298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.348456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.348482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.348643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.348667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.348785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.348810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.348940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.348965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.349108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.349136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 
00:26:51.928 [2024-07-25 09:41:24.349253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.349281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.349379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.349422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.349573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.349599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.349713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.349738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.349822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.349847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.350010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.350038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.350161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.350189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.350324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.350352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.350510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.350536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.350681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.350706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 
00:26:51.928 [2024-07-25 09:41:24.350792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.350818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.350978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.928 [2024-07-25 09:41:24.351006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.928 qpair failed and we were unable to recover it. 00:26:51.928 [2024-07-25 09:41:24.351131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.351159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.351276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.351304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.351434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.351460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.351609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.351635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.351744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.351784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.351912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.351940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.352071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.352099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.352224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.352253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 
00:26:51.929 [2024-07-25 09:41:24.352422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.352463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.352585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.352611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.352755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.352781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.352872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.352900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.353057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.353085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.353211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.353239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.353389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.353432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.353532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.353556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.353708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.353741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.353872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.353914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 
00:26:51.929 [2024-07-25 09:41:24.354066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.354093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.354221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.354249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.354411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.354435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.354546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.354571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.354714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.354742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.354904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.354932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.355081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.355109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.355259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.355287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.355423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.355448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.355575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.355600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 
00:26:51.929 [2024-07-25 09:41:24.355769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.355797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.355955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.355983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.356108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.356136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.356296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.356323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.356469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.356494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.356611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.356652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.356807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.356834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.356934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.356959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.357071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.357099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 00:26:51.929 [2024-07-25 09:41:24.357225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-07-25 09:41:24.357253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.929 qpair failed and we were unable to recover it. 
00:26:51.930 [2024-07-25 09:41:24.357418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.357457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.357608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.357635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.357757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.357782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.357934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.357977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.358108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.358151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.358294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.358333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.358471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.358498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.358657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.358684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.358856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.358888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.359028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.359074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 
00:26:51.930 [2024-07-25 09:41:24.359168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.359195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.359376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.359415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.359579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.359606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.359755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.359783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.359950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.359983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.360151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.360182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.360327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.360362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.360525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.360550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.360668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.360697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.360799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.360824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 
00:26:51.930 [2024-07-25 09:41:24.360956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.360983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.361133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.361160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.361289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.361316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.361506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.361545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.361701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.361727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.361863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.361891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.362051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.362078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.362239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.362267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.362411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.362437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.362583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.362608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 
00:26:51.930 [2024-07-25 09:41:24.362737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.362761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.362882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.362907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.363049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.363079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.363205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.363248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.363413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.363439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.363582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.363607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.363776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.363801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.930 [2024-07-25 09:41:24.363919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.930 [2024-07-25 09:41:24.363961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.930 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.364096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.364124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.364254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.364296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 
00:26:51.931 [2024-07-25 09:41:24.364430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.364455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.364574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.364599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.364769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.364798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.364941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.364969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.365121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.365148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.365275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.365306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.365423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.365448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.365591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.365616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.365756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.365783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.365905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.365933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 
00:26:51.931 [2024-07-25 09:41:24.366050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.366078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.366197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.366224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.366346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.366382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.366541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.366566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.366690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.366728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.366888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.366915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.367048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.367075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.367241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.367268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.367404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.367430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.367581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.367606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 
00:26:51.931 [2024-07-25 09:41:24.367767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.367790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.367959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.367986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.368112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.368139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.368272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.368313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.368457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.368482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.368627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.368666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.368809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.368847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.368963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.368986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.369153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.369180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.369336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.369381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 
00:26:51.931 [2024-07-25 09:41:24.369517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.369542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.369696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.369723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.369871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.369893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.370020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.370043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.370188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.370216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.370347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.931 [2024-07-25 09:41:24.370393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.931 qpair failed and we were unable to recover it. 00:26:51.931 [2024-07-25 09:41:24.370528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.370553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.370679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.370707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.370830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.370866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.371012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.371050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 
00:26:51.932 [2024-07-25 09:41:24.371174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.371202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.371334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.371384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.371522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.371547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.371688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.371715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.371862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.371885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.372013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.372037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.372169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.372200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.372335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.372383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.372545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.372570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.372725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.372752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 
00:26:51.932 [2024-07-25 09:41:24.372912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.372939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.373093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.373121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.373269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.373296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.373429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.373469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.373589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.373614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.373782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.373809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.373939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.373979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.374066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.374093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.374223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.374250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.374407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.374432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 
00:26:51.932 [2024-07-25 09:41:24.374552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.374577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.374722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.374750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.374882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.374919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.375016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.375039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.375160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.375183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.375318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.375360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.375529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.375554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.375673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.375701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.375854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.932 [2024-07-25 09:41:24.375877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.932 qpair failed and we were unable to recover it. 00:26:51.932 [2024-07-25 09:41:24.376044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.376072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 
00:26:51.933 [2024-07-25 09:41:24.376211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.376254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.376407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.376435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.376562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.376587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.376726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.376760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.376913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.376937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.377068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.377109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.377261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.377291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.377442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.377468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.377566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.377591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.377693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.377720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 
00:26:51.933 [2024-07-25 09:41:24.377843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.377866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.377994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.378017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.378134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.378164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.378317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.378363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.378493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.378518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.378667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.378695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.378827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.378850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.378994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.379035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.379183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.379211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.379387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.379432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 
00:26:51.933 [2024-07-25 09:41:24.379554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.379580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.379698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.379726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.379886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.379910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.380045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.380086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.380240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.380271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.380403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.380428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.380542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.380567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.380730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.380758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.380890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.380913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.381069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.381108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 
00:26:51.933 [2024-07-25 09:41:24.381229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.381263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.381433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.381459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.381554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.381579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.381707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.381735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.381841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.381865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.382007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.382031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.933 [2024-07-25 09:41:24.382167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.933 [2024-07-25 09:41:24.382196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.933 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.382329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.382373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.382487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.382512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.382677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.382705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 
00:26:51.934 [2024-07-25 09:41:24.382835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.382858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.382983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.383007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.383148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.383176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.383335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.383368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.383510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.383534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.383664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.383691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.383856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.383880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.383985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.384009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.384175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.384202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.384300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.384324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 
00:26:51.934 [2024-07-25 09:41:24.384491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.384516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.384678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.384701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.384879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.384901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.385064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.385208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.385376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.385560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.385715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.385877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.385978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.386001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 
00:26:51.934 [2024-07-25 09:41:24.386142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.386170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.386321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.386345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.386522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.386549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.386669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.386696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.386861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.386884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.387011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.387051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.387176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.387203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.387382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.387407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.387569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.387596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.387694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.387721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 
00:26:51.934 [2024-07-25 09:41:24.387858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.387881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.388046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.388083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.388178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.388205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.388332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.388376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.388503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.934 [2024-07-25 09:41:24.388543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.934 qpair failed and we were unable to recover it. 00:26:51.934 [2024-07-25 09:41:24.388671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.388699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.388866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.388888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.389017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.389059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.389206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.389234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.389363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.389402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 
00:26:51.935 [2024-07-25 09:41:24.389525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.389550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.389681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.389709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.389875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.389897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.390928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.390969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 
00:26:51.935 [2024-07-25 09:41:24.391087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.391114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.391261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.391284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.391400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.391425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.391587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.391614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.391739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.391777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.391882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.391906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.392046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.392073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.392229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.392252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.392423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.392456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.392574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.392602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 
00:26:51.935 [2024-07-25 09:41:24.392770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.392792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.392915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.392955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.393052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.393080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.393195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.393218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.393350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.393379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.393540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.393567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.393685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.393723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.393841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.393865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.394006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.394034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.394189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.394212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 
00:26:51.935 [2024-07-25 09:41:24.394381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.394409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.394558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.394586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.394708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.935 [2024-07-25 09:41:24.394745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.935 qpair failed and we were unable to recover it. 00:26:51.935 [2024-07-25 09:41:24.394875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.394898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.394992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.395020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.395138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.395166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.395318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.395345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.395474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.395498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.395650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.395674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.395810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.395838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 
00:26:51.936 [2024-07-25 09:41:24.395971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.396127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.396309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.396487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.396664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.396821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.396968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.396996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.397128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.397152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.397310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.397351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.397491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.397520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 
00:26:51.936 [2024-07-25 09:41:24.397638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.397677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.397805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.397829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.397969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.397997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.398147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.398171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.398299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.398323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.398518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.398562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.398673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.398697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.398801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.398825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.398927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.398951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.399108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.399132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 
00:26:51.936 [2024-07-25 09:41:24.399292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.399334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.399469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.399498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.399663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.399686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.399855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.399883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.400034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.400062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.400222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.400250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.400372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.400415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.400535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.400559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.400686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.400724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.400863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.400891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 
00:26:51.936 [2024-07-25 09:41:24.401026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.936 [2024-07-25 09:41:24.401054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.936 qpair failed and we were unable to recover it. 00:26:51.936 [2024-07-25 09:41:24.401208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.401232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.401371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.401415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.401540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.401568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.401691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.401729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.401848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.401872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.402032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.402060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.402188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.402212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.402312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.402336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 00:26:51.937 [2024-07-25 09:41:24.402497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.937 [2024-07-25 09:41:24.402541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.937 qpair failed and we were unable to recover it. 
00:26:51.937 [2024-07-25 09:41:24.402669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.402709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.402839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.402863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.402965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.402992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.403123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.403146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.403249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.403273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.403441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.403471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.403571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.403596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.403677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.403701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.403827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.403855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.404039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.404214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.404379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.404541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.404690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.404854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.404989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.405013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.405171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.405212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.405340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.405375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.405526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.405551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.405686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.405727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.405877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.405905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.406011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.406034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.406137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.406160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.937 [2024-07-25 09:41:24.406269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.937 [2024-07-25 09:41:24.406296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.937 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.406456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.406481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.406625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.406666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.406827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.406854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.406976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.407012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.407108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.407131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.407265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.407292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.407462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.407487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.407624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.407665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.407785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.407812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.407971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.408124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.408262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.408413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.408604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.408778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.408956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.408979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.409110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.409133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.409296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.409323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.409484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.409508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.409622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.409662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.409790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.409818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.409978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.410889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.410994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.411130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.411285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.411461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.411629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.411799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.411952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.411992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.412108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.412136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.412270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.412294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.412424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.938 [2024-07-25 09:41:24.412449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.938 qpair failed and we were unable to recover it.
00:26:51.938 [2024-07-25 09:41:24.412577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.412605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.412773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.412795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.412924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.412964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.413114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.413142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.413297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.413320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.413448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.413487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.413580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.413608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.413725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.413748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.413880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.413903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.414111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.414234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.414391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.414519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.414691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.414872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.414997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.415024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.415181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.415204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.415383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.415411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.415538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.415566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.415731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.415754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.415886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.415928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.416072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.416115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.416294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.416324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.416484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.416509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.416646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.416674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.416798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.416822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.416979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.417018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.417172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.417201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.417378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.417402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.417516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.417556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.417676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.417704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.417875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.417897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.418046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.418069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.418227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.418255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.418389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.418428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.418564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.418588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.418752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.418780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.418948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.939 [2024-07-25 09:41:24.418970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.939 qpair failed and we were unable to recover it.
00:26:51.939 [2024-07-25 09:41:24.419100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.419142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.419281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.419309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.419472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.419497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.419613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.419652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.419762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.419790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.419928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.419952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.420081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.420104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.420241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.420268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.420428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.420453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.420604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.420643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.420766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.420793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.420894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.420917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.421966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.421988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.422128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.422269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.422443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.422578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.422692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.422904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.422998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.423025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.423149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.423172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.423313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.423336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.423520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.423563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.423707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.423732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.423890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.423930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.424055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.424083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.424218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.424241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.424332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.424365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.424507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.424536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.424660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.424698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.424831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.424855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.425025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.940 [2024-07-25 09:41:24.425054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.940 qpair failed and we were unable to recover it.
00:26:51.940 [2024-07-25 09:41:24.425189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.425213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.425347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.425376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.425509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.425537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.425658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.425699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.425824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.425848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.425985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.426014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.426166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.426191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.426367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.426395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.426523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.426551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.426714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.426738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.426904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.426932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.427035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.427063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.427227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.427254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.427420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.427445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.427560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.427583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.427706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.427729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.427899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.427927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.428061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.428090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.428223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.428247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.428392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.428418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.428561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.428589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.428712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.428750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.428880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.428905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.429043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.429072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.429223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.429246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.429363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.429405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.429567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.429595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.429700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.429739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.429879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.429903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.430041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.430070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.430197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.430227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.430368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.941 [2024-07-25 09:41:24.430394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420
00:26:51.941 qpair failed and we were unable to recover it.
00:26:51.941 [2024-07-25 09:41:24.430546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.941 [2024-07-25 09:41:24.430574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.430704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.430728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.430856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.430880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.431043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.431072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.431208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.431231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.431362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.431387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.431523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.431551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.431709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.431732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.431868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.431909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.432041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.432070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 
00:26:51.942 [2024-07-25 09:41:24.432201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.432243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.432408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.432433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.432582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.432606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.432752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.432775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.432931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.432973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.433101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.433131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.433289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.433312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.433422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.433446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.433574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.433598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.433695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.433732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 
00:26:51.942 [2024-07-25 09:41:24.433856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.433880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.433996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.434153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.434276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.434430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.434592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.434775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.434936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.434964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.435113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.435136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.435292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.435334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 
00:26:51.942 [2024-07-25 09:41:24.435474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.435628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.435667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.435781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.435805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.435931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.435959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.436082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.436106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.436260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.436285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.436444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.436488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.436646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.436670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.942 qpair failed and we were unable to recover it. 00:26:51.942 [2024-07-25 09:41:24.436807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.942 [2024-07-25 09:41:24.436847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.436973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.437001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 
00:26:51.943 [2024-07-25 09:41:24.437157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.437180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.437303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.437327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.437452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.437482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.437654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.437678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.437844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.437871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.438000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.438028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.438161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.438203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.438361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.438402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.438498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.438523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.438666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.438689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 
00:26:51.943 [2024-07-25 09:41:24.438833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.438861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.438982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.439010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.439117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.439140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.439298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.439322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.439487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.439515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.439640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.439678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.439841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.439882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.440009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.440036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.440192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.440214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.440394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.440433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 
00:26:51.943 [2024-07-25 09:41:24.440596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.440624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.440770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.440793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.440952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.440979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.441098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.441125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.441280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.441303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.441468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.441492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.441607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.441647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.441772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.441810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.441969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.442136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 
00:26:51.943 [2024-07-25 09:41:24.442291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.442461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.442604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.442756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.442941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.442964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.443086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.443113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.943 qpair failed and we were unable to recover it. 00:26:51.943 [2024-07-25 09:41:24.443268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.943 [2024-07-25 09:41:24.443292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.443460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.443489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.443651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.443678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.443799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.443841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 
00:26:51.944 [2024-07-25 09:41:24.443976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.444159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.444319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.444508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.444670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.444817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.444974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.444997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.445137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.445164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.445293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.445316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.445457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.445481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 
00:26:51.944 [2024-07-25 09:41:24.445600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.445624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.445770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.445807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.445961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.446124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.446310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.446448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.446564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.446709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.446869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.446909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.447027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.447054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 
00:26:51.944 [2024-07-25 09:41:24.447174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.447197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.447327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.447351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.447504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.447532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.447691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.447714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.447834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.447857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.448019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.448046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.448178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.448204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.448368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.448411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.448531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.448558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.448665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.448689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 
00:26:51.944 [2024-07-25 09:41:24.448853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.448876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.449009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.449037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.449187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.449210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.449344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.449374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.944 [2024-07-25 09:41:24.449511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.944 [2024-07-25 09:41:24.449538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.944 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.449640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.449664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.449845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.449882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.450010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.450037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.450190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.450213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.450392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.450421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 
00:26:51.945 [2024-07-25 09:41:24.450597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.450640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.450754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.450779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.450914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.450938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.451098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.451126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.451256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.451298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.451429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.451455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.451615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.451654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.451822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.451846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.452003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.452045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.452171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.452198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 
00:26:51.945 [2024-07-25 09:41:24.452331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.452378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.452527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.452551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.452715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.452743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.452866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.452890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.453027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.453051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.453226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.453256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.453427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.453452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.453568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.453608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.453761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.453789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.453913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.453937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 
00:26:51.945 [2024-07-25 09:41:24.454071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.454095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.454256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.454285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.454435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.454460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.454583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.454607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.454772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.454799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.454925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.454948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.455071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.455095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.455232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.455262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.455422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.455447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.455594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.455619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 
00:26:51.945 [2024-07-25 09:41:24.455788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.455836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.455999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.456022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.945 [2024-07-25 09:41:24.456134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.945 [2024-07-25 09:41:24.456158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.945 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.456305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.456333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.456492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.456517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.456654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.456678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.456812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.456840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.456990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.457148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.457322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 
00:26:51.946 [2024-07-25 09:41:24.457498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.457655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.457789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.457924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.457948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.458077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.458100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.458234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.458262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.458430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.458455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.458611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.458639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.458790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.458818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.458989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 
00:26:51.946 [2024-07-25 09:41:24.459114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.459303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.459465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.459603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.459769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.459928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.459951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.460112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.460154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.460245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.460272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.460402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.460441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.460588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.460630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 
00:26:51.946 [2024-07-25 09:41:24.460747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.460775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.460913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.460936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.461067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.461091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.461230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.461258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.946 [2024-07-25 09:41:24.461382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.946 [2024-07-25 09:41:24.461422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.946 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.461547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.461571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.461716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.461745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.461871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.461910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.462082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.462111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.462263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.462291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 
00:26:51.947 [2024-07-25 09:41:24.462444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.462469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.462593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.462617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.462727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.462754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.462887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.462911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.463101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.463129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.463252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.463280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.463425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.463450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.463571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.463595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.463696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.463724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.463833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.463857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 
00:26:51.947 [2024-07-25 09:41:24.463983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.464010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.464145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.464184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.464316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.464362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.464476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.464500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.464656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.464685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.464813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.464850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.464980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.465142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.465286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.465446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 
00:26:51.947 [2024-07-25 09:41:24.465580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.465752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.465922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.465950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.466101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.466130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.466266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.466290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.466452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.947 [2024-07-25 09:41:24.466491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.947 qpair failed and we were unable to recover it. 00:26:51.947 [2024-07-25 09:41:24.466623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.466651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.466811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.466834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.466929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.466953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.467090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.467118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 
00:26:51.948 [2024-07-25 09:41:24.467270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.467294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.467420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.467445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.467587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.467615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.467734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.467757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.467893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.467917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.468049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.468077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.468192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.468216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.468380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.468420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.468571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.468598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.468720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.468758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 
00:26:51.948 [2024-07-25 09:41:24.468873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.468896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.469023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.469051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.469199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.469223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.469376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.469419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.469520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.469548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.469709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.469733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.469911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.469939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.470101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.470129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.470245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.470285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.470435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.470461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 
00:26:51.948 [2024-07-25 09:41:24.470570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.470599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.470728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.470767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.470880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.470904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.471039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.471079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.471217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.471241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.471397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.471440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.471565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.471593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.471755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.471778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.471905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.471929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.948 [2024-07-25 09:41:24.472096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.472124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 
00:26:51.948 [2024-07-25 09:41:24.472249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.948 [2024-07-25 09:41:24.472272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.948 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.472432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.472473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.472591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.472618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.472789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.472812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.472940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.472980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.473131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.473159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.473279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.473303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.473464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.473489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.473599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.473624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.473773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.473810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 
00:26:51.949 [2024-07-25 09:41:24.473942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.473981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.474135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.474163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.474282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.474305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.474481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.474506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.474656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.474685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.474851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.474874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.474993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.475032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.475185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.475213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.475385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.475410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.475522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.475562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 
00:26:51.949 [2024-07-25 09:41:24.475679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.475708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.475858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.475895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.476059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.476086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.476211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.476238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.476392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.476416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.476539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.476563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.476736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.476764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.476894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.476932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.477088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.477131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.477232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.477260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 
00:26:51.949 [2024-07-25 09:41:24.477419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.477447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.477618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.477646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.477797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.477825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.477950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.477973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.478085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.478109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.949 [2024-07-25 09:41:24.478237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.949 [2024-07-25 09:41:24.478264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.949 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.478430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.478454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.478621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.478659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.478825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.478853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.478978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.479016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 
00:26:51.950 [2024-07-25 09:41:24.479142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.479166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.479327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.479363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.479530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.479554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.479693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.479716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.479888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.479917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.480044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.480081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.480174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.480197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.480313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.480340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.480501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.480525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.480617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.480654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 
00:26:51.950 [2024-07-25 09:41:24.480816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.480843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.481019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.481040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.481164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.481204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.481353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.481387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.481522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.481544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.481698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.481735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.481873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.481900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.482030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.482053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.482195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.482234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.482392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.482417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 
00:26:51.950 [2024-07-25 09:41:24.482558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.482581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.482721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.482744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.482881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.482907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.483059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.483083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.483190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.483228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.483375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.483401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.483544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.950 [2024-07-25 09:41:24.483569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.950 qpair failed and we were unable to recover it. 00:26:51.950 [2024-07-25 09:41:24.483686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.483711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.483830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.483858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.483992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 
00:26:51.951 [2024-07-25 09:41:24.484163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.484326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.484469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.484584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.484784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.484945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.484970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.485092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.485132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.485255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.485295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.485391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.485417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.485560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.485585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 
00:26:51.951 [2024-07-25 09:41:24.485738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.485763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.485880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.485905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.486968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.486992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.487126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.487151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 
00:26:51.951 [2024-07-25 09:41:24.487264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.487290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.487443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.487467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.487607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.487632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.487744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.487769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.487916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.487941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.951 qpair failed and we were unable to recover it. 00:26:51.951 [2024-07-25 09:41:24.488090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.951 [2024-07-25 09:41:24.488115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.488216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.488244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.488372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.488416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.488526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.488551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.488686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.488711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 
00:26:51.952 [2024-07-25 09:41:24.488856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.488881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.489939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.489964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.490109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.490134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.490278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.490302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 
00:26:51.952 [2024-07-25 09:41:24.490475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.490501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.490627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.490652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.490777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.490802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.490928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.490952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.491043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.491068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.491216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.491241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.491351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.491386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.491532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.491558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.491668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.491693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.491812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.491836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 
00:26:51.952 [2024-07-25 09:41:24.491989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.492014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.492169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.492209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.492328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.492353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.492510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.492535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.492638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.492662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.492808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.492833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.952 [2024-07-25 09:41:24.493016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.952 [2024-07-25 09:41:24.493041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.952 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.493159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.493198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.493310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.493335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.493473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.493499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 
00:26:51.953 [2024-07-25 09:41:24.493643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.493668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.493770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.493798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.493964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.493989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.494120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.494145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.494292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.494317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.494416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.494442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.494597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.494622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.494733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.494762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.494910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.494935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.495105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.495129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 
00:26:51.953 [2024-07-25 09:41:24.495257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.495282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.495426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.495453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.495569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.495594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.495704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.495729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.495914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.495939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.496059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.496084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.496221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.496246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.496422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.496448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.496569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.496594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.496714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.496739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 
00:26:51.953 [2024-07-25 09:41:24.496897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.496921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.497053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.497195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.497344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.497555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.497720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.497877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.497999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.498141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.498282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 
00:26:51.953 [2024-07-25 09:41:24.498468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.498611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.498752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.953 [2024-07-25 09:41:24.498893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.953 [2024-07-25 09:41:24.498918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.953 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.499036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.499061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.499212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.499240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.499407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.499433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.499550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.499575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.499699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.499724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.499853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.499877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 
00:26:51.954 [2024-07-25 09:41:24.500010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.500157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.500304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.500450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.500619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.500766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.500927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.500952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.501096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.501126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.501275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.501301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.501449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.501475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 
00:26:51.954 [2024-07-25 09:41:24.501622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.501647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.501760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.501785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.501873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.501898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.502060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.502083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.502216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.502241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.502386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.502412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.502523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.502562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.502683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.502722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.502819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.502847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.503009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.503034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 
00:26:51.954 [2024-07-25 09:41:24.503174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.503197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.503372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.954 [2024-07-25 09:41:24.503398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.954 qpair failed and we were unable to recover it. 00:26:51.954 [2024-07-25 09:41:24.503545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.503571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.503693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.503718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.503836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.503861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.503972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.503998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.504090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.504231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.504379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.504532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 
00:26:51.955 [2024-07-25 09:41:24.504645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.504787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.504936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.504961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.505057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.505082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.505234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.505262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.505425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.505451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.505549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.505574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.505736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.505760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.505885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.505910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.506024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.506049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 
00:26:51.955 [2024-07-25 09:41:24.506183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.506224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.506354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.506387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.506510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.506535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.506663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.506688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.506820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.506844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.507000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.507102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.507245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.507412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.507550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 
00:26:51.955 [2024-07-25 09:41:24.507726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.507873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.507898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.508958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.508983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 00:26:51.955 [2024-07-25 09:41:24.509126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.509151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.955 qpair failed and we were unable to recover it. 
00:26:51.955 [2024-07-25 09:41:24.509296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.955 [2024-07-25 09:41:24.509336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.509484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.509509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.509631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.509656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.509801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.509826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.509962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.509986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.510109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.510134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.510258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.510283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.510434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.510459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.510603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.510643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.510729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.510754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 
00:26:51.956 [2024-07-25 09:41:24.510873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.510898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.511905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.511930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.512056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.512081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.512204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.512232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 
00:26:51.956 [2024-07-25 09:41:24.512384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.512412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.512585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.512610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.512758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.512796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.512943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.512967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.513135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.513160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.513246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.513271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.513421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.513445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.513565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.513594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.513708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.513734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.513889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.513913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 
00:26:51.956 [2024-07-25 09:41:24.514066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.514090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.514220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.514245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.514368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.514394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.956 qpair failed and we were unable to recover it. 00:26:51.956 [2024-07-25 09:41:24.514507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.956 [2024-07-25 09:41:24.514532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.514654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.514694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.514836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.514860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.514997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.515166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.515315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.515493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 
00:26:51.957 [2024-07-25 09:41:24.515608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.515758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.515903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.515928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.516122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.516271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.516416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.516614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.516734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.516879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.516992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 
00:26:51.957 [2024-07-25 09:41:24.517114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.517272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.517454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.517625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.517816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.517950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.517975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.518138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.518162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.518337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.518383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.518501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.518527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.518672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.518697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 
00:26:51.957 [2024-07-25 09:41:24.518820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.518845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.518982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.519129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.519247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.519405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.519551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.519698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.519841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.519887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.520008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.520033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.520165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.520191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 
00:26:51.957 [2024-07-25 09:41:24.520311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.520336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.520466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.520491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.520637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.520676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.957 [2024-07-25 09:41:24.520773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.957 [2024-07-25 09:41:24.520798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.957 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.520949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.520974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.521062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.521087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.521234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.521274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.521387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.521413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.521503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.521528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.521675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.521700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 
00:26:51.958 [2024-07-25 09:41:24.521875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.521899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.522064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.522089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.522235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.522259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.522417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.522443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.522588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.522613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.522731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.522756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.522875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.522900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.523043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.523066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.523205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.523230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.523366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.523409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 
00:26:51.958 [2024-07-25 09:41:24.523564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.523588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.523749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.523777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.523921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.523946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.524064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.524088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.524239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.524263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.524426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.524452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.524575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.524601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.524735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.524760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.524922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.524947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.525068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.525093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 
00:26:51.958 [2024-07-25 09:41:24.525211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.525236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.525380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.525422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.525541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.525566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.525699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.525724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.525846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.525886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.526034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.526185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.526331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.526517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.526651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 
00:26:51.958 [2024-07-25 09:41:24.526798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.526936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.526961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.527075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.527100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.527238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.527265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.527428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.527454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.958 [2024-07-25 09:41:24.527567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.958 [2024-07-25 09:41:24.527592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.958 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.527721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.527746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.527878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.527903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.528051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.528197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 
00:26:51.959 [2024-07-25 09:41:24.528385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.528535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.528672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.528815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.528966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.528991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.529104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.529129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.529273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.529298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.529411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.529437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.529561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.529586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.529740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.529765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 
00:26:51.959 [2024-07-25 09:41:24.529882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.529907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.529985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.530025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.530156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.530181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.530330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.530361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.530485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.530510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.530665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.530689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.530836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.530861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.531008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.531047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.531188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.531213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.531362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.531388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 
00:26:51.959 [2024-07-25 09:41:24.531537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.531561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.531681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.531706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.531858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.531883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.532001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.532026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.532180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.532219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.532368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.532393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.532509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.532534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.532665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.532693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.532861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.532889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.533003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 
00:26:51.959 [2024-07-25 09:41:24.533139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.533282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.533455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.533630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.533800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.533944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.533969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.534086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.534111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.534233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.534258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.534403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.534429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.959 qpair failed and we were unable to recover it. 00:26:51.959 [2024-07-25 09:41:24.534545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.959 [2024-07-25 09:41:24.534570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 
00:26:51.960 [2024-07-25 09:41:24.534680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.534705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.534854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.534894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.535924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.535949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.536099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.536124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 
00:26:51.960 [2024-07-25 09:41:24.536291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.536319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.536467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.536493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.536617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.536642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.536759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.536784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.536974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.537035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.537190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.537227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.537379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.537419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.537521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.537548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.537674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.537719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.537834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.537877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 
00:26:51.960 [2024-07-25 09:41:24.538010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.538039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.538163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.538191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.538340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.538377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.538521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.538547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.538666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.538691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.538811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.538837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.538973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.539019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.539185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.539217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.539338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.539386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.539494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.539525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 
00:26:51.960 [2024-07-25 09:41:24.539654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.539697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.539846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.539890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.540019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.540045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.960 qpair failed and we were unable to recover it. 00:26:51.960 [2024-07-25 09:41:24.540184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.960 [2024-07-25 09:41:24.540209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.540326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.540352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.540508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.540534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.540687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.540712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.540805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.540830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.540992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.541171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 
00:26:51.961 [2024-07-25 09:41:24.541294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.541455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.541615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.541764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.541944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.541972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.542128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.542156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.542289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.542318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.542440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.542466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.542565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.542591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.961 [2024-07-25 09:41:24.542683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.542726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 
00:26:51.961 [2024-07-25 09:41:24.542855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.961 [2024-07-25 09:41:24.542882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.961 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.543893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.543918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.544061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.544086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.544248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.544276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 
00:26:51.962 [2024-07-25 09:41:24.544408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.544435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.544532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.544557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.544680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.544719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.544867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.544893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.544999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.545157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.545308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.545455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.545583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.545686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 
00:26:51.962 [2024-07-25 09:41:24.545838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.545863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.546006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.546034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.546128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.546156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.546279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.546306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.962 [2024-07-25 09:41:24.546465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.962 [2024-07-25 09:41:24.546521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.962 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.546684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.546711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.546846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.546890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.546993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.547021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.547185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.547216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.547367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.547395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 
00:26:51.963 [2024-07-25 09:41:24.547519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.547546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.547677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.547702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.547845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.547871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.547992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.548019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.548176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.548201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.548350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.548383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.548491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.548516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.548647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.548672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.548786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.548811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.548991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.549022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 
00:26:51.963 [2024-07-25 09:41:24.549170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.549195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.549347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.549397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.549496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.549542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.549663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.549706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.549836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.549878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.550013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.550173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.550334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.550468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.550587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 
00:26:51.963 [2024-07-25 09:41:24.550748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.550935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.550963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.551086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.551114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.551260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.551287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.551426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.551451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.551553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.551578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.551724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.551764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.551890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.551917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.552050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.552075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.552204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.552232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 
00:26:51.963 [2024-07-25 09:41:24.552327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.552354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.552491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.963 [2024-07-25 09:41:24.552516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.963 qpair failed and we were unable to recover it. 00:26:51.963 [2024-07-25 09:41:24.552646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.552671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.552781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.552806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.552951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.552976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.553139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.553167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.553266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.553294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.553390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.553432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.553525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.553550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.553674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.553698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 
00:26:51.964 [2024-07-25 09:41:24.553860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.553888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.554009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.554037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.554155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.554183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.554307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.554334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.554502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.554527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.554659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.554687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.554813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.554841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.555012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.555141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.555324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 
00:26:51.964 [2024-07-25 09:41:24.555502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.555653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.555840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.555970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.555994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.556133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.556162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.556322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.556349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.556465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.556490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.556586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.556610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.556751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.556779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.556909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.556952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 
00:26:51.964 [2024-07-25 09:41:24.557048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.557075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.557201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.557229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.557386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.557424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.557543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.557580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.557757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.557799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.557934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.557963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.558088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.558116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.558214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.964 [2024-07-25 09:41:24.558241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.964 qpair failed and we were unable to recover it. 00:26:51.964 [2024-07-25 09:41:24.558406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.558432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.558530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.558554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 
00:26:51.965 [2024-07-25 09:41:24.558673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.558713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.558816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.558845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.559018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.559068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.559197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.559225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.559371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.559426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.559569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.559607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.559753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.559797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.559933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.559975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.560109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.560157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.560323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.560373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 
00:26:51.965 [2024-07-25 09:41:24.560471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.560496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.560637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.560666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.560838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.560861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.561003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.561048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.561166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.561189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.561309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.561334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.561483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.561521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.561635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.561687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.561862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.561887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.562023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 
00:26:51.965 [2024-07-25 09:41:24.562177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.562334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.562490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.562656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.562799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.562942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.562966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.563069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.563096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.563229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.563257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.563428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.563466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.563621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.563662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 
00:26:51.965 [2024-07-25 09:41:24.563810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.563851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.563979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.564007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.564138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.564161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.564323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.965 [2024-07-25 09:41:24.564347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.965 qpair failed and we were unable to recover it. 00:26:51.965 [2024-07-25 09:41:24.564462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.564487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.564607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.564631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.564743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.564767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.564896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.564919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.565052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.565182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 
00:26:51.966 [2024-07-25 09:41:24.565352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.565503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.565649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.565807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.565969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.565993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.566155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.566179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.566294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.566319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.566477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.566515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.566676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.566701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.566790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.566814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 
00:26:51.966 [2024-07-25 09:41:24.566946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.566970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.567099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.567128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.567255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.567283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.567451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.567489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.567596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.567621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.567756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.567796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.567918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.567945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.568081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.568224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.568351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 
00:26:51.966 [2024-07-25 09:41:24.568520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.568699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.568813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.966 [2024-07-25 09:41:24.568950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.966 [2024-07-25 09:41:24.568978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.966 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.569141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.569169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.569308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.569338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.569483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.569508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.569625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.569663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.569830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.569858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.569944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.569971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 
00:26:51.967 [2024-07-25 09:41:24.570132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.570159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.570285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.570313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.570448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.570472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.570592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.570616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.570755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.570796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.570897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.570924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.571030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.571054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.571225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.571252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.571397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.571426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.571544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.571568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 
00:26:51.967 [2024-07-25 09:41:24.571687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.571710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.571847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.571874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.571997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.572039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.572165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.572193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.572340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.572373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.572511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.572534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.572663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.572686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.572819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.572846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.572972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.573013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.573147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.573175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 
00:26:51.967 [2024-07-25 09:41:24.573298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.573326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.573436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.573460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.573606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.573644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.573786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.573814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.573980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.574008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.574129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.574156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.574279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.574306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.574448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.574472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.574587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.574611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 00:26:51.967 [2024-07-25 09:41:24.574768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.967 [2024-07-25 09:41:24.574795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.967 qpair failed and we were unable to recover it. 
00:26:51.967 [2024-07-25 09:41:24.574920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.574963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.575079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.575106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.575223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.575251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.575399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.575437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.575564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.575602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.575784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.575813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.575955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.575984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.576150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.576178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.576281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.576310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.576473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.576498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 
00:26:51.968 [2024-07-25 09:41:24.576654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.576677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.576832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.576860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.576986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.577014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.577181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.577209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.577338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.577376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.577504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.577528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.577670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.577693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.577860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.577887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.578010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.578146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 
00:26:51.968 [2024-07-25 09:41:24.578306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.578482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.578592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.578765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.578928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.578956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.579081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.579109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.579222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.579249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.579410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.579435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.579552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.579576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.579719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.579760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 
00:26:51.968 [2024-07-25 09:41:24.579877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.579904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.580024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.580063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.580192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.580220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.580340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.580373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.580535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.580558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.580678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.580717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.580850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.580878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.581030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.968 [2024-07-25 09:41:24.581058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.968 qpair failed and we were unable to recover it. 00:26:51.968 [2024-07-25 09:41:24.581174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.581201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.581330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.581374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 
00:26:51.969 [2024-07-25 09:41:24.581507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.581531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.581675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.581714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.581841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.581869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.581995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.582035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.582150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.582177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.582272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.582299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.582466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.582509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.582610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.582652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.582809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.582852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.583022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.583064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 
00:26:51.969 [2024-07-25 09:41:24.583207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.583235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.583378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.583403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.583524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.583566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.583712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.583754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.583894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.583921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.584097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.584149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.584280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.584308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.584464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.584489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.584587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.584615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.584742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.584769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 
00:26:51.969 [2024-07-25 09:41:24.584926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.584953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.585104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.585131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.585222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.585249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.585378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.585419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.585498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.585523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.585642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.585683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.585816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.585844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.586022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.586064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.586208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.586232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.586407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.586436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 
00:26:51.969 [2024-07-25 09:41:24.586603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.586626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.586739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.586763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.586895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.586919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.587073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.587112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.587212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.587236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.587412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.969 [2024-07-25 09:41:24.587440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.969 qpair failed and we were unable to recover it. 00:26:51.969 [2024-07-25 09:41:24.587561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.587589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.587717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.587744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.587896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.587924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.588084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.588128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 
00:26:51.970 [2024-07-25 09:41:24.588270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.588293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.588390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.588415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.588576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.588619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.588775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.588816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.588954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.588982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.589099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.589127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.589249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.589277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.589435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.589459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.589618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.589645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.589794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.589822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 
00:26:51.970 [2024-07-25 09:41:24.589943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.589971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.590139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.590181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.590349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.590380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.590536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.590561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.590688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.590726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.590859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.590882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.591019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.591044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.591200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.591238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.591393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.591417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.591546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.591570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 
00:26:51.970 [2024-07-25 09:41:24.591702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.591727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.591904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.591927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.592060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.592085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.592244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.592283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.592412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.592437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.592590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.592631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.592785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.592831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.592932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.592974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.593123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.593147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 00:26:51.970 [2024-07-25 09:41:24.593262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.970 [2024-07-25 09:41:24.593298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:51.970 qpair failed and we were unable to recover it. 
00:26:51.970 through 00:26:52.259 [2024-07-25 09:41:24.593 to 09:41:24.627]: the following pair of messages repeats for every connection attempt in this interval, with only the timestamps and the tqpair value (alternating between 0xf26250 and 0x7fece8000b90) changing:
  posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
  nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<0xf26250 | 0x7fece8000b90> with addr=10.0.0.2, port=4420
  qpair failed and we were unable to recover it.
00:26:52.259 [2024-07-25 09:41:24.627657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.627682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.627840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.627862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.628899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.628940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.629099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.629124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 
00:26:52.259 [2024-07-25 09:41:24.629295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.629319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.629484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.629509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.629680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.629703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.629874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.629897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.630014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.630053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.630207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.630230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.630370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.630396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.630517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.630542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.630684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.630707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.630857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.630880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 
00:26:52.259 [2024-07-25 09:41:24.631025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.631063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.631171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.631195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.631362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.259 [2024-07-25 09:41:24.631388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.259 qpair failed and we were unable to recover it. 00:26:52.259 [2024-07-25 09:41:24.631534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.631559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.631655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.631679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.631785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.631809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.631950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.631973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.632075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.632099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.632219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.632258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.632445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.632482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 
00:26:52.260 [2024-07-25 09:41:24.632610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.632649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.632773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.632800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.632937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.632984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.633102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.633129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.633295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.633335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.633514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.633557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.633688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.633718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.633818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.633846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.634021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.634048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.634167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.634195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 
00:26:52.260 [2024-07-25 09:41:24.634295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.634322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.634484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.634522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.634643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.634687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.634830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.634858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.635030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.635196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.635314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.635436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.635579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.635720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 
00:26:52.260 [2024-07-25 09:41:24.635873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.635896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.636965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.636989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.260 [2024-07-25 09:41:24.637107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.260 [2024-07-25 09:41:24.637132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.260 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.637248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.637272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 
00:26:52.261 [2024-07-25 09:41:24.637392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.637417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.637555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.637579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.637743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.637767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.637977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.638000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.638166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.638215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.638332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.638361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.638486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.638529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.638654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.638707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.638834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.638862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.639017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.639041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 
00:26:52.261 [2024-07-25 09:41:24.639168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.639193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.639320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.639364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.639548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.639574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.639797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.639829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.639994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.640017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.640197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.640220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.640350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.640395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.640616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.640641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.640784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.640812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.640954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.640981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 
00:26:52.261 [2024-07-25 09:41:24.641193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.641220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.641317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.641363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.641567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.641591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.641759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.641781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.641983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.642010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.642178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.642205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.642386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.642411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.642516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.642540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.642661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.642689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.642811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.642856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 
00:26:52.261 [2024-07-25 09:41:24.642989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.643167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.643354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.643486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.643598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.643758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.261 [2024-07-25 09:41:24.643950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-07-25 09:41:24.643990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.261 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.644152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.644180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.644346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.644389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.644506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.644530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 
00:26:52.262 [2024-07-25 09:41:24.644641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.644674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.644842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.644865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.645018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.645040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.645220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.645254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.645416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.645440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.645654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.645688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.645820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.645847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.645978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.646020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.646180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.646208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.646327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.646364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 
00:26:52.262 [2024-07-25 09:41:24.646529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.646552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.646740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.646767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.646919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.646947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.647083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.647126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.647262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.647289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.647426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.647451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.647611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.647663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.647758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.647798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.647954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.647981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.648139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.648167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 
00:26:52.262 [2024-07-25 09:41:24.648374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.648435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.648565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.648589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.648760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.648783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.648996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.649024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.649154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.649182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.649289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.649316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.649453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.262 [2024-07-25 09:41:24.649477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.262 qpair failed and we were unable to recover it. 00:26:52.262 [2024-07-25 09:41:24.649626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.649668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.649776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.649814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.650008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.650035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 
00:26:52.263 [2024-07-25 09:41:24.650175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.650203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.650378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.650405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.650543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.650567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.650756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.650784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.650920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.650968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.651110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.651138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.651289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.651317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.651482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.651506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.651661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.651688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.651901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.651929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 
00:26:52.263 [2024-07-25 09:41:24.652070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.652113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.652239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.652267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.652412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.652437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.652544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.652568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.652706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.652729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.652867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.652894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.653111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.653147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.653276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.653303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.653480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.653505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.653679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.653702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 
00:26:52.263 [2024-07-25 09:41:24.653903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.653930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.654039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.654066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.654242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.654269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.654451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.654484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.654624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.654647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.654793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.654830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.655001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.655028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.655157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.655188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.655346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.655390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.655589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.655613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 
00:26:52.263 [2024-07-25 09:41:24.655766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.655793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.655931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.655968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.656110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.656150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.263 [2024-07-25 09:41:24.656336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.263 [2024-07-25 09:41:24.656372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.263 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.656521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.656544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.656676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.656699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.656880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.656907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.657068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.657091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.657255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.657283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.657450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.657479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 
00:26:52.264 [2024-07-25 09:41:24.657662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.657696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.657839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.657867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.658052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.658089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.658256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.658279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.658416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.658456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.658594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.658622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.658797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.658819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.658985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.659013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.659147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.659174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.659335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.659364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 
00:26:52.264 [2024-07-25 09:41:24.659551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.659578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.659705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.659732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.659878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.659914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.660037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.660075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.660192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.660224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.660365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.660389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.660599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.660622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.660748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.660775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.660990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.661134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 
00:26:52.264 [2024-07-25 09:41:24.661255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.661410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.661564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.661730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.661888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.661925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.662045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.662068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.662188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.662215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.662431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.662456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.662615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.662656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.264 [2024-07-25 09:41:24.662779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.662807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 
00:26:52.264 [2024-07-25 09:41:24.662926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.264 [2024-07-25 09:41:24.662949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.264 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.663087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.663110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.663237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.663264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.663425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.663449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.663616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.663657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.663749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.663776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.663903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.663926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.664126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.664153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.664298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.664326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.664429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.664453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 
00:26:52.265 [2024-07-25 09:41:24.664647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.664670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.664805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.664832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.665025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.665048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.665157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.665198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.665426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.665449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.665534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.665557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.665717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.665755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.665868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.665895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.666002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.666164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 
00:26:52.265 [2024-07-25 09:41:24.666370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.666496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.666670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.666813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.666966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.666989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.667116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.667142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.667272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.667299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.667422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.667461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.667584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.667607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.667787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.667814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 
00:26:52.265 [2024-07-25 09:41:24.667987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.668170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.668333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.668467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.668628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.668792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.668950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.668973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.669087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.669110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.669372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.265 [2024-07-25 09:41:24.669400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.265 qpair failed and we were unable to recover it. 00:26:52.265 [2024-07-25 09:41:24.669527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.669551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 
00:26:52.266 [2024-07-25 09:41:24.669657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.669680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.669868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.669895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.670971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.670993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.671110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.671133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 
00:26:52.266 [2024-07-25 09:41:24.671293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.671320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.671483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.671507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.671634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.671681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.671781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.671808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.671907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.671930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.672096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.672246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.672400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.672516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.672681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 
00:26:52.266 [2024-07-25 09:41:24.672824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.672976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.672999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.673163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.673190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.673303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.673326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.673485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.673509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.673668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.673696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.673831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.673868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.673991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.674014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.674175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.674202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.674369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.674397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 
00:26:52.266 [2024-07-25 09:41:24.674562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.674586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.674712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.674739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.674845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.674867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.675027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.675050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.675153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.675180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.675327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.675354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.266 [2024-07-25 09:41:24.675468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.266 [2024-07-25 09:41:24.675492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.266 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.675641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.675668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.675821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.675843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.675958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.675981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 
00:26:52.267 [2024-07-25 09:41:24.676124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.676151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.676277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.676318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.676443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.676468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.676583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.676607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.676741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.676764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.676917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.676940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.677073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.677223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.677380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.677526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 
00:26:52.267 [2024-07-25 09:41:24.677682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.677837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.677965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.677992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.678138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.678165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.678281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.678304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.678451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.678476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.678594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.678618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.678743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.678782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.678943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.678972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 00:26:52.267 [2024-07-25 09:41:24.679094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.267 [2024-07-25 09:41:24.679131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.267 qpair failed and we were unable to recover it. 
00:26:52.267 [2024-07-25 09:41:24.679267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.267 [2024-07-25 09:41:24.679290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:52.267 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0xf26250 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-25 09:41:24.679 onward ...]
00:26:52.273 [2024-07-25 09:41:24.713891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.273 [2024-07-25 09:41:24.713914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:52.273 qpair failed and we were unable to recover it.
00:26:52.273 [2024-07-25 09:41:24.714049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.714188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.714330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.714515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.714625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.714765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.714909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.714932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.715098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.715126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.715292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.715314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.715432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.715456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 
00:26:52.273 [2024-07-25 09:41:24.715595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.715622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.715751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.715789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.715900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.715923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.716068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.716096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.716213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.716236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.716412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.716437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.716597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.716625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.716789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.716811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.717029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.717057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.273 qpair failed and we were unable to recover it. 00:26:52.273 [2024-07-25 09:41:24.717191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.273 [2024-07-25 09:41:24.717219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 
00:26:52.274 [2024-07-25 09:41:24.717381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.717407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.717500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.717526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.717666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.717689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.717882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.717904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.718055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.718083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.718230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.718257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.718354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.718384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.718478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.718502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.718595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.718640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.718758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.718781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 
00:26:52.274 [2024-07-25 09:41:24.718985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.719022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.719134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.719162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.719292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.719315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.719451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.719475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.719649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.719688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.719850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.719873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.720077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.720104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.720267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.720295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.720428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.720452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.720578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.720602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 
00:26:52.274 [2024-07-25 09:41:24.720715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.720738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.720845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.720868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.721030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.721053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.721262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.721290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.721424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.721448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.721641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.721669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.721762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.721790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.721907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.721931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.722155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.722183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.722299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.722326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 
00:26:52.274 [2024-07-25 09:41:24.722466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.722491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.722581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.722605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.722736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.722763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.722920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.722943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.723111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.723138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.723232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.723264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.723368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.723392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.274 [2024-07-25 09:41:24.723514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.274 [2024-07-25 09:41:24.723538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.274 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.723646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.723674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.723821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.723858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 
00:26:52.275 [2024-07-25 09:41:24.724029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.724057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.724179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.724207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.724394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.724435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.724587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.724611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.724719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.724747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.724859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.724882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.725047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.725071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.725236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.725263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.725397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.725421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.725552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.725577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 
00:26:52.275 [2024-07-25 09:41:24.725728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.725756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.725918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.725940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.726067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.726106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.726217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.726244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.726381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.726406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.726558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.726583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.726773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.726801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.726919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.726942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.727166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.727193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.727274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.727302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 
00:26:52.275 [2024-07-25 09:41:24.727432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.727457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.727596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.727621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.727775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.727803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.727983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.728006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.728133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.728175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.728295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.728323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.728490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.728514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.728657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.728684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.728857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.728884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.729000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.729037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 
00:26:52.275 [2024-07-25 09:41:24.729160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.729183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.729381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.729422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.729541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.729565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.729770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.729797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.729906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.729934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.275 [2024-07-25 09:41:24.730095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.275 [2024-07-25 09:41:24.730118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.275 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.730288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.730320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.730454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.730479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.730580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.730603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.730729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.730753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 
00:26:52.276 [2024-07-25 09:41:24.730879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.730902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.731053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.731191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.731385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.731554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.731686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.731832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.731979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.732154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.732310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 
00:26:52.276 [2024-07-25 09:41:24.732478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.732614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.732779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.732954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.732977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.733096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.733120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.733278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.733305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.733417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.733441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.733590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.733615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.733747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.733774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.733871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.733895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 
00:26:52.276 [2024-07-25 09:41:24.734028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.734052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.734216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.734243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.734367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.734391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.734519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.734543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.734663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.734691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.734852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.734875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.735039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.735068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.276 [2024-07-25 09:41:24.735192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.276 [2024-07-25 09:41:24.735220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.276 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.735378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.735402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.735531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.735556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 
00:26:52.277 [2024-07-25 09:41:24.735720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.735748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.735844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.735868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.736024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.736048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.736172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.736200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.736320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.736367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.736499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.736523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.736667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.736694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.736861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.736885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.737004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.737043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.737165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.737193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 
00:26:52.277 [2024-07-25 09:41:24.737307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.737330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.737515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.737539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.737686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.737714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.737839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.737876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.737996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.738141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.738294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.738489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.738594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.738766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 
00:26:52.277 [2024-07-25 09:41:24.738925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.738948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.739090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.739118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.739250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.739273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.739392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.739417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.739547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.739575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.739710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.739747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.739870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.739893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.740019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.740046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.740166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.740206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.740363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.740391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 
00:26:52.277 [2024-07-25 09:41:24.740505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.740529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.740681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.740704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.740869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.740896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.741012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.741039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.741167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.741193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.277 [2024-07-25 09:41:24.741331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.277 [2024-07-25 09:41:24.741354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.277 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.741497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.741524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.741618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.741642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.741769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.741792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.741921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.741949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 
00:26:52.278 [2024-07-25 09:41:24.742076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.742099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.742210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.742233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.742397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.742425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.742534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.742558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.742671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.742694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.742839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.742866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.742987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.743010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.743168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.743191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.743300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.743328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.743471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.743494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 
00:26:52.278 [2024-07-25 09:41:24.743656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.743679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.743815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.743842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.743991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.744150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.744307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.744486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.744635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.744800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.744976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.744999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.745137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.745176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 
00:26:52.278 [2024-07-25 09:41:24.745298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.745326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.745443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.745466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.745595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.745618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.745757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.745784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.745903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.745926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.746092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.746131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.746250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.746277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.746424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.746448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.746580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.746604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.746736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.746764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 
00:26:52.278 [2024-07-25 09:41:24.746858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.746881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.747015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.747038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.747177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.747204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.278 [2024-07-25 09:41:24.747319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.278 [2024-07-25 09:41:24.747364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.278 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.747492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.747516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.747624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.747681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.747816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.747841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.747998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.748192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.748314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 
00:26:52.279 [2024-07-25 09:41:24.748453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.748623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.748789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.748936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.748959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.749069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.749092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.749229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.749256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.749408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.749432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.749546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.749570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.749723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.749760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.749900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.749927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 
00:26:52.279 [2024-07-25 09:41:24.750047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.750200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.750339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.750460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.750596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.750727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.750884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.750911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.751069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.751091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.751257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.751284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.751416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.751455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 
00:26:52.279 [2024-07-25 09:41:24.751606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.751632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.751780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.751807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.751951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.751985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.752107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.752131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.752259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.752283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.752422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.752447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.752602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.752625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.752748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.752790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.752911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.752938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.753033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.753056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 
00:26:52.279 [2024-07-25 09:41:24.753184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.753208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.279 qpair failed and we were unable to recover it. 00:26:52.279 [2024-07-25 09:41:24.753296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.279 [2024-07-25 09:41:24.753323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.753459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.753483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.753559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.753582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.753692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.753719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.753847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.753870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.753998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.754164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.754344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.754492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 
00:26:52.280 [2024-07-25 09:41:24.754610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.754779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.754953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.754992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.755949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.755981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 
00:26:52.280 [2024-07-25 09:41:24.756134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.756157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.756326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.756353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.756471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.756508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.756653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.756689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.756823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.756850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.756944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.756972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.757082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.757105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.757235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.757259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.757433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.757457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.757570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.757593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 
00:26:52.280 [2024-07-25 09:41:24.757720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.757743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.757847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.757875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.758020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.758043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.758178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.758217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.758336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.758373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.758531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.758554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.758725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.758752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.758897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.758924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.759018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.759041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.280 qpair failed and we were unable to recover it. 00:26:52.280 [2024-07-25 09:41:24.759192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.280 [2024-07-25 09:41:24.759216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 
00:26:52.281 [2024-07-25 09:41:24.759385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.759414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.759532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.759555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.759686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.759709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.759813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.759840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.759962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.759985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.760097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.760120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.760252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.760280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.760396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.760421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.760514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.760538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.760657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.760685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 
00:26:52.281 [2024-07-25 09:41:24.760805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.760828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.760977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.761131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.761286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.761446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.761610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.761763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.761894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.761917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.762072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.762099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.762208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.762231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 
00:26:52.281 [2024-07-25 09:41:24.762328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.762379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.762501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.762526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.762667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.762691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.762843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.762867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.762998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.763135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.763300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.763446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.763609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.763755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 
00:26:52.281 [2024-07-25 09:41:24.763885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.763910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.281 [2024-07-25 09:41:24.763998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.281 [2024-07-25 09:41:24.764022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.281 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.764099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.764123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.764280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.764327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.764489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.764539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.764729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.764765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.764909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.764932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.765046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.765071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.765211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.765237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.765347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.765391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 
00:26:52.282 [2024-07-25 09:41:24.765499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.765522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.765657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.765681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.765816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.765841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.765981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.766135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.766318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.766467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.766577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.766723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.766832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 
00:26:52.282 [2024-07-25 09:41:24.766971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.766996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.767137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.767162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.767342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.767372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.767501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.767525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.767641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.767684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.767851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.767888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.768043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.768182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.768319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.768492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 
00:26:52.282 [2024-07-25 09:41:24.768663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.768779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.768916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.768941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 00:26:52.282 [2024-07-25 09:41:24.769921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.282 [2024-07-25 09:41:24.769945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.282 qpair failed and we were unable to recover it. 
00:26:52.283 [2024-07-25 09:41:24.770057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.770081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.770167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.770192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.770349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.770403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.770556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.770595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.770744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.770773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.770935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.770984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.771123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.771166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.771291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.771322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.771459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.771486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.771656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.771695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 
00:26:52.283 [2024-07-25 09:41:24.771839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.771875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.772044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.772102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.772273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.772300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.772466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.772493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.772654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.772695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.772851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.772899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.773032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.773067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.773196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.773221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.773343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.773379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.773534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.773559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 
00:26:52.283 [2024-07-25 09:41:24.773786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.773814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.773981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.774156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.774304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.774449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.774636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.774754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.774921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.774945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.775067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.775092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.775223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.775247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 
00:26:52.283 [2024-07-25 09:41:24.775386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.775414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.775582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.775614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.775730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.775780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.775947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.775973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.776156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.776203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.776342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.776375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.776540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.283 [2024-07-25 09:41:24.776584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.283 qpair failed and we were unable to recover it. 00:26:52.283 [2024-07-25 09:41:24.776748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.776790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.776876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.776903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.777056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.777099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 
00:26:52.284 [2024-07-25 09:41:24.777286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.777312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.777454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.777498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.777695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.777738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.777888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.777932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.778056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.778096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.778262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.778296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.778467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.778511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.778626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.778671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.778801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.778842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.778938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.778981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 
00:26:52.284 [2024-07-25 09:41:24.779132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.779158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.779280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.779310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.779484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.779527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.779672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.779703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.779818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.779844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.779994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.780026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.780204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.780229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.780420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.780459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.780633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.780671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.780802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.780828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 
00:26:52.284 [2024-07-25 09:41:24.781068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.781098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.781220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.781249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.781372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.781415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.781551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.781579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.781705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.781734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.781881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.781911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.782027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.782055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.782174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.782202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.782411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.782438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.782557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.782582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 
00:26:52.284 [2024-07-25 09:41:24.782704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.782729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.782811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.782847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.783011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.783039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.783194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.783222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.284 [2024-07-25 09:41:24.783369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.284 [2024-07-25 09:41:24.783398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.284 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.783612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.783649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.783801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.783825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.783935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.783962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.784090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.784131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.784243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.784271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 
00:26:52.285 [2024-07-25 09:41:24.784410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.784436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.784522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.784548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.784702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.784728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.784917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.784951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.785113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.785141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.785298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.785327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.785445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.785471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.785553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.785578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.785695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.785728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.785921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.785951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 
00:26:52.285 [2024-07-25 09:41:24.786089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.786118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.786213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.786241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.786371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.786415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.786561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.786586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.786812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.786837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.786986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.787010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.787193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.787221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.787389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.787430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.787529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.787555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.787765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.787790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 
00:26:52.285 [2024-07-25 09:41:24.787933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.787958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.788077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.788101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.788235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.788263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.788454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.788480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.788593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.788618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.788830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.788854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.788996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.789030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.789143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.789169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.789305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.789333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.789444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.789470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 
00:26:52.285 [2024-07-25 09:41:24.789556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.285 [2024-07-25 09:41:24.789581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.285 qpair failed and we were unable to recover it. 00:26:52.285 [2024-07-25 09:41:24.789720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.789764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.789916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.789941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.790069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.790094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.790221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.790250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.790354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.790403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.790487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.790512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.790612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.790652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.790853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.790885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 00:26:52.286 [2024-07-25 09:41:24.791037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.791061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it. 
00:26:52.286 [2024-07-25 09:41:24.791256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.791287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it.
00:26:52.286 [2024-07-25 09:41:24.793979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.286 [2024-07-25 09:41:24.794025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.286 qpair failed and we were unable to recover it.
00:26:52.292 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, and "qpair failed and we were unable to recover it." — repeats for every subsequent connection attempt against tqpair=0x7fece0000b90 and tqpair=0x7fece8000b90 (addr=10.0.0.2, port=4420), from 2024-07-25 09:41:24.791256 through 09:41:24.827438 ...]
00:26:52.292 [2024-07-25 09:41:24.827568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.827593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.827740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.827766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.827891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.827917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.828060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.828088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.828266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.828293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.828446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.828472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.828588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.828613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.828756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.828781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.828903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.828932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.829043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.829071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 
00:26:52.292 [2024-07-25 09:41:24.829226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.829254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.829430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.829456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.829617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.829646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.829798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.829823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.829963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.829987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.830125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.830153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.830345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.830383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.830541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.830566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.830697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.830735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.830918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.830943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 
00:26:52.292 [2024-07-25 09:41:24.831121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.831146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.831336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.831376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.831559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.831591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.831723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.831751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.831950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.831975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.832108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.832133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.832299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.832332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.832456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.832481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.832596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.832628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.832847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.832893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 
00:26:52.292 [2024-07-25 09:41:24.833018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.833043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.833167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.833192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.833347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.292 [2024-07-25 09:41:24.833393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.292 qpair failed and we were unable to recover it. 00:26:52.292 [2024-07-25 09:41:24.833538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.833563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.833749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.833774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.833911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.833936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.834110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.834135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.834317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.834341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.834499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.834524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.834627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.834652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 
00:26:52.293 [2024-07-25 09:41:24.834770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.834795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.834944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.834968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.835109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.835149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.835255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.835290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.835443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.835469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.835599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.835624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.835778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.835803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.835919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.835945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.836093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.836135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.836300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.836339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 
00:26:52.293 [2024-07-25 09:41:24.836438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.836463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.836545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.836571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.836698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.836723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.836821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.836846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.836985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.837185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.837327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.837505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.837651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.837788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 
00:26:52.293 [2024-07-25 09:41:24.837946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.837969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.838168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.838202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.838329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.838354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.838503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.838528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.838688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.838713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.838869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.838894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.839065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.839089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.839194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.839218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.839349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.839387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.293 [2024-07-25 09:41:24.839526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.839551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 
00:26:52.293 [2024-07-25 09:41:24.839675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.293 [2024-07-25 09:41:24.839699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.293 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.839878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.839903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.840003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.840027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.840170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.840195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.840331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.840363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.840518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.840543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.840679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.840704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.840910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.840949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.841085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.841240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 
00:26:52.294 [2024-07-25 09:41:24.841381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.841525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.841667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.841799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.841948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.841972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.842161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.842186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.842276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.842302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.842428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.842469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.842589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.842617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.842739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.842781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 
00:26:52.294 [2024-07-25 09:41:24.842958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.842984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.843967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.843991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.844094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.844119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.844242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.844282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 
00:26:52.294 [2024-07-25 09:41:24.844436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.844461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.844636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.844666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.844823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.844848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.844992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.845016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.845165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.845190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.845330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.845364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.845570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.845596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.845738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.294 [2024-07-25 09:41:24.845778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.294 qpair failed and we were unable to recover it. 00:26:52.294 [2024-07-25 09:41:24.845926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.845950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.846060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.846085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 
00:26:52.295 [2024-07-25 09:41:24.846201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.846225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.846370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.846398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.846530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.846558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.846659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.846690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.846808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.846833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.847004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.847030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.847188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.847212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.847421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.847445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.847571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.847596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.847722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.847747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 
00:26:52.295 [2024-07-25 09:41:24.847897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.847921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.848083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.848108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.848277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.848302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.848470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.848495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.848693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.848718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.848847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.848873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.848987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.849012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.849189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.849213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.849344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.849379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.849533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.849559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 
00:26:52.295 [2024-07-25 09:41:24.849739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.849763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.849930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.849954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.850115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.850140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.850314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.850339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.850482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.850514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.850607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.850633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.850786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.850825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.850926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.850956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.851118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.851143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 00:26:52.295 [2024-07-25 09:41:24.851285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.295 [2024-07-25 09:41:24.851309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.295 qpair failed and we were unable to recover it. 
00:26:52.296 [2024-07-25 09:41:24.851512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.851537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.851674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.851699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.851899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.851924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.852096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.852136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.852287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.852312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.852483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.852508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.852738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.852764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.852897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.852921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.853046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.853071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.853276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.853300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 
00:26:52.296 [2024-07-25 09:41:24.853445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.853471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.853647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.853672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.853800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.853827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.853987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.854026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.854166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.854190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.854396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.854421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.854569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.854594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.854785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.854812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.854934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.854958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.855101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.855139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 
00:26:52.296 [2024-07-25 09:41:24.855245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.855270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.855363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.855395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.855522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.855548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.855718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.855742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.855900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.855924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.856040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.856075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.856252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.856276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.856495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.856520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.856680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.856725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.856843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.856867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 
00:26:52.296 [2024-07-25 09:41:24.857011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.857036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.857203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.857243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.857407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.857432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.857611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.857637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.857815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.857854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.858017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.858041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.858194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.296 [2024-07-25 09:41:24.858219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.296 qpair failed and we were unable to recover it. 00:26:52.296 [2024-07-25 09:41:24.858361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.858387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.858502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.858526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.858661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.858688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 
00:26:52.297 [2024-07-25 09:41:24.858881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.858906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.859041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.859079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.859219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.859258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.859445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.859471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.859653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.859678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.859881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.859914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.860063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.860087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.860213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.860237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.860369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.860395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.860525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.860550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 
00:26:52.297 [2024-07-25 09:41:24.860694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.860719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.860875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.860899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.861957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.861982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 
00:26:52.297 [2024-07-25 09:41:24.862147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.862172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.862375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.862401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.862508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.862534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.862675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.862700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.862888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.862913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.863059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.863084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.863268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.863293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.863436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.863468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.863564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.863594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.863737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.863762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 
00:26:52.297 [2024-07-25 09:41:24.863887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.863912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.864121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.864146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.864277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.864314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.864490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.864516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.297 [2024-07-25 09:41:24.864653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.297 [2024-07-25 09:41:24.864694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.297 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.864911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.864939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.865074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.865099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.865215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.865240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.865340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.865370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.865611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.865636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 
00:26:52.298 [2024-07-25 09:41:24.865756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.865781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.865870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.865902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.866033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.866067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.866216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.866241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.866418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.866444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.866626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.866664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.866815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.866840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.866970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.866995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.867081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.867212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 
00:26:52.298 [2024-07-25 09:41:24.867361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.867527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.867630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.867831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.867966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.867991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.868082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.868107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.868215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.868240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.868362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.868387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.868495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.868520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.868659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.868684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 
00:26:52.298 [2024-07-25 09:41:24.868804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.868828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.868970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.869119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.869308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.869497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.869612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.869748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.869926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.869959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.870107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.870136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.870323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.870348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 
00:26:52.298 [2024-07-25 09:41:24.870459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.870483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.870653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.298 [2024-07-25 09:41:24.870680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.298 qpair failed and we were unable to recover it. 00:26:52.298 [2024-07-25 09:41:24.870838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.870863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.871009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.871034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.871222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.871246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.871389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.871415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.871528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.871553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.871712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.871736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.871917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.871942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.872070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.872104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 
00:26:52.299 [2024-07-25 09:41:24.872410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.872435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.872588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.872613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.872785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.872818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.872976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.873203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.873353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.873554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.873681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.873799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.873973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.873998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 
00:26:52.299 [2024-07-25 09:41:24.874104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.874129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.874215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.874241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.874346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.874376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.874504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.874530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.874679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.874725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.874876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.874901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.875030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.875054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.875188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.875213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.875397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.875423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.875520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.875545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 
00:26:52.299 [2024-07-25 09:41:24.875667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.875713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.875877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.875917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.876004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.876029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.876144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.876169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.876291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.876316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.876504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.876537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.876629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.299 [2024-07-25 09:41:24.876654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.299 qpair failed and we were unable to recover it. 00:26:52.299 [2024-07-25 09:41:24.876777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.876802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.877011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.877164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 
00:26:52.300 [2024-07-25 09:41:24.877310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.877457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.877557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.877726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.877920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.877960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.878103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.878128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.878325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.878350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.878495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.878521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.878636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.878660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.878820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.878845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 
00:26:52.300 [2024-07-25 09:41:24.878988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.879013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.879131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.879156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.879409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.879435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.879570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.879595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.879750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.879776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.879946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.879982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.880094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.880134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.880318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.880342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.880516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.880542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 00:26:52.300 [2024-07-25 09:41:24.880725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.300 [2024-07-25 09:41:24.880749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.300 qpair failed and we were unable to recover it. 
00:26:52.305 [2024-07-25 09:41:24.916161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.916189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.916361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.916386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.916498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.916537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.916699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.916727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.916853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.916891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.917043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.917084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.917229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.917257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.917432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.917456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.305 [2024-07-25 09:41:24.917680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.305 [2024-07-25 09:41:24.917707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.305 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.917845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.917872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 
00:26:52.306 [2024-07-25 09:41:24.918016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.918054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.918173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.918196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.918370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.918415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.918567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.918590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.918732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.918759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.918900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.918932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.919062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.919085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.919181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.919204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.919381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.919410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.919572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.919595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 
00:26:52.306 [2024-07-25 09:41:24.919743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.919782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.919951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.919983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.920152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.920184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.920302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.920343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.920477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.920506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.920665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.920689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.920866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.920889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.920992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.921026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.921153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.921176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.921291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.921315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 
00:26:52.306 [2024-07-25 09:41:24.921498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.921526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.921690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.921714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.921822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.921846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.921994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.922022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.922205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.922228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.922389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.922428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.922553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.922581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.922715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.922758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.922933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.922961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.923167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.923195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 
00:26:52.306 [2024-07-25 09:41:24.923321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.923367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.923531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.923554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.923734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.923762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.923924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.923947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.924121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.924149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.924250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.306 [2024-07-25 09:41:24.924286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.306 qpair failed and we were unable to recover it. 00:26:52.306 [2024-07-25 09:41:24.924395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.924420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.924611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.924651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.924803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.924831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.925040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.925062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 
00:26:52.307 [2024-07-25 09:41:24.925228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.925256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.925478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.925509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.925612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.925635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.925826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.925854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.925966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.925994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.926104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.926131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.926264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.926288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.926413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.926442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.926620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.926658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.926810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.926838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 
00:26:52.307 [2024-07-25 09:41:24.927024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.927051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.927210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.927232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.927386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.927428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.927601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.927629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.927829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.927852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.927972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.927999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.928164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.928192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.928388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.928427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.928550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.928578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.928716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.928744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 
00:26:52.307 [2024-07-25 09:41:24.928905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.928943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.929151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.929186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.929319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.929346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.929457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.929481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.929616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.929640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.929776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.929803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.929950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.929987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.930166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.930196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.930352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.930388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.930515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.930552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 
00:26:52.307 [2024-07-25 09:41:24.930661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.930685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.930815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.930843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.930954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.307 [2024-07-25 09:41:24.930978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.307 qpair failed and we were unable to recover it. 00:26:52.307 [2024-07-25 09:41:24.931088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.931112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.931246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.931274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.931483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.931508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.931647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.931675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.931849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.931877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.932057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.932080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.932168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.932192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 
00:26:52.308 [2024-07-25 09:41:24.932337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.932374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.932501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.932525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.932672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.932696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.932920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.932956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.933121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.933144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.933325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.933364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.933499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.933528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.933652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.933690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.933781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.933804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.933976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.934003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 
00:26:52.308 [2024-07-25 09:41:24.934182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.934205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.934414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.934442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.934603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.934630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.934808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.934841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.934981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.935009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.935123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.935150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.935318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.935346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.935517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.935549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.935693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.935720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.935866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.935904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 
00:26:52.308 [2024-07-25 09:41:24.936077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.936105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.936264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.936292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.936454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.936479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.936632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.936676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.936804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.936831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.937079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.937112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.937257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.937293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.937419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.937444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.937532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.937557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 00:26:52.308 [2024-07-25 09:41:24.937758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.308 [2024-07-25 09:41:24.937781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.308 qpair failed and we were unable to recover it. 
00:26:52.308 [2024-07-25 09:41:24.937937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.937965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.938060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.938094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.938278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.938316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.938488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.938517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.938633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.938657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.938824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.938872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.939026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.939054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.939180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.939217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.939369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.939411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.939534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.939562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 
00:26:52.309 [2024-07-25 09:41:24.939725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.939763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.939958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.939986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.940112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.940140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.940427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.940452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.940585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.940612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.940854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.940896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.941014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.941037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.941195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.941218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.941445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.941480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.941617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.941640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 
00:26:52.309 [2024-07-25 09:41:24.941853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.941881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.942034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.942062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.942279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.942307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.942441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.942466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.942559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.942583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.942740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.942764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.942939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.942967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.943083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.943110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.943380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.943404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 00:26:52.309 [2024-07-25 09:41:24.943553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.309 [2024-07-25 09:41:24.943586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.309 qpair failed and we were unable to recover it. 
00:26:52.309 [2024-07-25 09:41:24.943703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.943731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.943837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.943861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.944110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.944144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.944296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.944324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.944508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.944532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.944706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.944733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.944834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.944862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.945019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.945042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.945151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.945174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.945310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.945338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 
00:26:52.310 [2024-07-25 09:41:24.945470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.945494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.945633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.945656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.945836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.945864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.946019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.946041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.946213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.946240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.946397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.946426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.946556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.946594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.946716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.946740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.946886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.946914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.947093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.947116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 
00:26:52.310 [2024-07-25 09:41:24.947253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.947295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.947447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.947472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.947653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.947677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.947849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.947877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.947975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.948011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.948141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.948168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.948315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.948338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.948566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.948597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.948700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.948723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.948901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.948939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 
00:26:52.310 [2024-07-25 09:41:24.949042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.949070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.949230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.949253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.949431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.949460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.949653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.949681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.949970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.949993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.950150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.950178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.950339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.950374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.310 qpair failed and we were unable to recover it. 00:26:52.310 [2024-07-25 09:41:24.950476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.310 [2024-07-25 09:41:24.950501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.950687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.950726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.950908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.950936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 
00:26:52.311 [2024-07-25 09:41:24.951117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.951139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.951261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.951304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.951456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.951485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.951598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.951622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.951779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.951802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.951991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.952024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.952178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.952206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.952371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.952413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.952559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.952584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.952691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.952715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 
00:26:52.311 [2024-07-25 09:41:24.952868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.952906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.953038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.953065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.953227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.953266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.953379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.953421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.953567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.953595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.953727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.953766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.953951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.953979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.954134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.954161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.954406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.954439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.954569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.954596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 
00:26:52.311 [2024-07-25 09:41:24.954754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.954782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.954951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.954974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.955153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.955181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.955315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.955343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.955524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.955548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.955721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.955752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.955845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.955873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.956013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.956045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.956197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.956251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.956455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.956484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 
00:26:52.311 [2024-07-25 09:41:24.956639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.956672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.956905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.956933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.957159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.957187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.957426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.957449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.311 [2024-07-25 09:41:24.957577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.311 [2024-07-25 09:41:24.957601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.311 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.957817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.957844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.957978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.958016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.958237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.958264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.958476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.958500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.958632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.958656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 
00:26:52.312 [2024-07-25 09:41:24.958813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.958841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.959076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.959104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.959279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.959302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.959484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.959512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.959707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.959735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.959914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.959937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.960101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.960123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.960295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.960323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.960488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.960512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.960638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.960682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 
00:26:52.312 [2024-07-25 09:41:24.960827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.960854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.961063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.961086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.961261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.961289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.961468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.961498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.961669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.961691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.961882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.961910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.962121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.962149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.962382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.962406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.962542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.962570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.962772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.962799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 
00:26:52.312 [2024-07-25 09:41:24.962984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.963009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.963224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.963252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.963438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.963466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.963603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.963628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.963820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.963848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.964059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.964091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.964284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.964312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.964495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.964522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.964655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.964686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.964886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.964911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 
00:26:52.312 [2024-07-25 09:41:24.965069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.965097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.965281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.965309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.312 qpair failed and we were unable to recover it. 00:26:52.312 [2024-07-25 09:41:24.965485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.312 [2024-07-25 09:41:24.965511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.313 qpair failed and we were unable to recover it. 00:26:52.313 [2024-07-25 09:41:24.965611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.313 [2024-07-25 09:41:24.965636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.313 qpair failed and we were unable to recover it. 00:26:52.313 [2024-07-25 09:41:24.965796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.313 [2024-07-25 09:41:24.965824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.313 qpair failed and we were unable to recover it. 00:26:52.313 [2024-07-25 09:41:24.966047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.313 [2024-07-25 09:41:24.966070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.313 qpair failed and we were unable to recover it. 00:26:52.313 [2024-07-25 09:41:24.966289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.313 [2024-07-25 09:41:24.966316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.313 qpair failed and we were unable to recover it. 00:26:52.313 [2024-07-25 09:41:24.966469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.313 [2024-07-25 09:41:24.966499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.313 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.966655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.966695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.966903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.966928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 
00:26:52.590 [2024-07-25 09:41:24.967071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.967097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.967264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.967289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.967472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.967498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.967608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.967649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.967789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.967814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.967933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.967958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.968139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.968167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.968328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.968352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.968458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.968484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.968676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.968704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 
00:26:52.590 [2024-07-25 09:41:24.968823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.968863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.969094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.969122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.969351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.969387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.969559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.969584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.969760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.969788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.969946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.969974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.970142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.590 [2024-07-25 09:41:24.970166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.590 qpair failed and we were unable to recover it. 00:26:52.590 [2024-07-25 09:41:24.970369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.970398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.970533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.970560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.973538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.973583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 
00:26:52.591 [2024-07-25 09:41:24.973791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.973822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.974044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.974073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.974241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.974265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.974438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.974467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.974696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.974724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.974858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.974886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.975106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.975134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.975315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.975343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.975544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.975569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.975779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.975807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 
00:26:52.591 [2024-07-25 09:41:24.975898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.975926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.976114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.976137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.976374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.976415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.976579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.976603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.976844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.976867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.977101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.977129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.977346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.977384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.977626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.977650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.977813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.977841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.978085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.978114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 
00:26:52.591 [2024-07-25 09:41:24.978326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.978371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.978569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.978597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.978819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.978847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.979084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.979108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.979241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.979270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.979489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.979518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.979715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.979738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.979890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.979918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.980122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.980150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.980320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.980346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 
00:26:52.591 [2024-07-25 09:41:24.980529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.980558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.980774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.980802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.980995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.981018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.981193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.981222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.981410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.981438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.981608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.981631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.981802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.981830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.982002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.982030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.982245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.982273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 00:26:52.591 [2024-07-25 09:41:24.982407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.591 [2024-07-25 09:41:24.982431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.591 qpair failed and we were unable to recover it. 
00:26:52.591 [2024-07-25 09:41:24.982582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.982621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.982843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.982866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.983088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.983116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.983336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.983373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.983597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.983621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.983747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.983780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.983960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.983988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.984177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.984210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.984428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.984458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.984628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.984656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 
00:26:52.592 [2024-07-25 09:41:24.984853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.984875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.985114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.985142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.985364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.985392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.985624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.985648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.985820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.985855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.985987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.986015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.986149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.986172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.986315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.986338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.986546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.986574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.986715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.986753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 
00:26:52.592 [2024-07-25 09:41:24.986946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.986974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.987199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.987227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.987380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.987404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.987550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.987592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.987760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.987788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.988008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.988030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.988158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.988185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.988399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.988424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.988540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.988564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.988818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.988846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 
00:26:52.592 [2024-07-25 09:41:24.988967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.988996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.989223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.989251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.989456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.989485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.989655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.989683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.989858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.989892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.990118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.990146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.990318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.990346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.990506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.990531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.990718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.990758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.592 qpair failed and we were unable to recover it. 00:26:52.592 [2024-07-25 09:41:24.990916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.592 [2024-07-25 09:41:24.990945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 
00:26:52.593 [2024-07-25 09:41:24.991120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.991143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.991324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.991352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.991573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.991601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.991818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.991840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.992012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.992040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.992214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.992247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.992399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.992424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.992641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.992669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.992867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.992896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.993083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.993106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 
00:26:52.593 [2024-07-25 09:41:24.993299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.993327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.993504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.993532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.993712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.993758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.993915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.993943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.994066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.994094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.994278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.994306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.994442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.994471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.994649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.994677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.994829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.994851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.995051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.995079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 
00:26:52.593 [2024-07-25 09:41:24.995252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.995281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.995459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.995483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.995704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.995732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.995890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.995918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.996140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.996162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.996351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.996388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.996621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.996649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.996835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.996858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.997100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.997127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.997348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.997384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 
00:26:52.593 [2024-07-25 09:41:24.997570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.997594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.997736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.997764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.997921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.997949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.998118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.998141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.998334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.998371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.998542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.998570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.998733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.593 [2024-07-25 09:41:24.998755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.593 qpair failed and we were unable to recover it. 00:26:52.593 [2024-07-25 09:41:24.999003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:24.999031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:24.999194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:24.999230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:24.999445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:24.999469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 
00:26:52.594 [2024-07-25 09:41:24.999699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:24.999727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:24.999934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:24.999962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.000184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.000207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.000347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.000402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.000560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.000584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.000814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.000841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.001007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.001035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.001211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.001239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.001404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.001428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.001596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.001624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 
00:26:52.594 [2024-07-25 09:41:25.001769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.001797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.001997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.002020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.002185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.002214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.002349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.002389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.002563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.002588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.002735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.002763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.002986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.003015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.003228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.003251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.003490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.003519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.003711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.003740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 
00:26:52.594 [2024-07-25 09:41:25.003925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.003948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.004138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.004167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.004297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.004325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.004557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.004581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.004721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.004750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.004888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.004916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.594 [2024-07-25 09:41:25.005040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.594 [2024-07-25 09:41:25.005063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.594 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.005227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.005266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.005383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.005412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.005574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.005598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 
00:26:52.595 [2024-07-25 09:41:25.005700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.005734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.005901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.005929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.006106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.006136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.006282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.006310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.006543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.006568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.006692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.006730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.006911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.006939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.007117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.007145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.007382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.007407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.007637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.007664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 
00:26:52.595 [2024-07-25 09:41:25.007857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.007885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.008110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.008132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.008305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.008333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.008551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.008579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.008767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.008790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.008942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.008964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.009157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.595 [2024-07-25 09:41:25.009185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.595 qpair failed and we were unable to recover it. 00:26:52.595 [2024-07-25 09:41:25.009413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.009438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.009615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.009643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.009853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.009881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 
00:26:52.596 [2024-07-25 09:41:25.010053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.010076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.010246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.010273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.010509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.010537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.010770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.010793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.010960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.010987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.011207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.011235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.011427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.011451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.011596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.011624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.011740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.011767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.011992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.012015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 
00:26:52.596 [2024-07-25 09:41:25.012240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.012268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.012494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.012519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.012728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.012752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.012928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.012952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.013144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.013172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.013409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.013435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.013591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.013616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.013759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.013786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.013988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.014013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.014195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.014219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 
00:26:52.596 [2024-07-25 09:41:25.014409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.014435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.014617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.014656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.014813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.014841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.015018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.015055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.015265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.015289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.015448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.015472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.015648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.015673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.015892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.015916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.016101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.016146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.016374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.016400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 
00:26:52.596 [2024-07-25 09:41:25.016578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.016612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.016818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.016846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.017065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.017093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.017276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.017300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.596 [2024-07-25 09:41:25.017533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.596 [2024-07-25 09:41:25.017559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.596 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.017721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.017746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.017944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.017970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.018123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.018148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.018373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.018398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.018582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.018607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 
00:26:52.597 [2024-07-25 09:41:25.018779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.018803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.018973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.018997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.019157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.019197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.019413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.019439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.019645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.019685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.019906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.019930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.020048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.020073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.020292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.020317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.020452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.020478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.020669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.020695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 
00:26:52.597 [2024-07-25 09:41:25.020924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.020949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.021104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.021128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.021328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.021352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.021582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.021608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.021790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.021835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.021975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.021999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.022128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.022153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.022308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.022333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.022453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.022478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.022629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.022654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 
00:26:52.597 [2024-07-25 09:41:25.022808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.022832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.023096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.023120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.023353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.023390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.023571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.023596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.023798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.023823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.024066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.024093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.024281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.024309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.024569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.024595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.024819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.024844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.025026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.025065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 
00:26:52.597 [2024-07-25 09:41:25.025242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.025267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.025478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.025504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.597 qpair failed and we were unable to recover it. 00:26:52.597 [2024-07-25 09:41:25.025717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.597 [2024-07-25 09:41:25.025742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.025901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.025926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.026120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.026145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.026302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.026327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.026466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.026492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.026693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.026717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.026854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.026880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.027099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.027124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 
00:26:52.598 [2024-07-25 09:41:25.027306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.027330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.027538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.027563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.027766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.027790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.027957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.027995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.028169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.028194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.028405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.028446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.028623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.028648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.028825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.028850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.029019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.029044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.029249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.029274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 
00:26:52.598 [2024-07-25 09:41:25.029478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.029504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.029632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.029656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.029901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.029929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.030117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.030141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.030375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.030401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.030554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.030579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.030741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.030765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.030948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.030973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.031150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.031175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.031371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.031399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 
00:26:52.598 [2024-07-25 09:41:25.031626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.031651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.031864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.031888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.032059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.032101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.032286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.032311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.032472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.032498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.032721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.032746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.032896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.032921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.033029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.033054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.033242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.033265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 00:26:52.598 [2024-07-25 09:41:25.033503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.598 [2024-07-25 09:41:25.033530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.598 qpair failed and we were unable to recover it. 
00:26:52.598 [2024-07-25 09:41:25.033759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.033784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.033939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.033964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.034112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.034137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.034372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.034398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.034607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.034632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.034842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.034867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.035049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.035074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.035178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.035203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.035411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.035438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.035624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.035649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 
00:26:52.599 [2024-07-25 09:41:25.035866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.035890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.036062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.036088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.036251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.036275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.036482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.036508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.036713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.036740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.036922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.036950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.037180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.037219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.037442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.037468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.037669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.037694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.037929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.037954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 
00:26:52.599 [2024-07-25 09:41:25.038141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.038166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.038398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.038424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.038641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.038666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.038876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.038916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.039159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.039198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.039372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.039398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.039580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.039604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.039819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.039847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.040034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.040073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.040262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.040286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 
00:26:52.599 [2024-07-25 09:41:25.040454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.040481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.040646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.040671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.040831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.040876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.041101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.041126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.041372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.041412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.599 qpair failed and we were unable to recover it. 00:26:52.599 [2024-07-25 09:41:25.041645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.599 [2024-07-25 09:41:25.041670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.041814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.041839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.042021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.042044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.042170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.042203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.042375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.042401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 
00:26:52.600 [2024-07-25 09:41:25.042643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.042668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.042849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.042882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.043049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.043077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.043195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.043234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.043369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.043395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.043602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.043627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.043843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.043868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.044017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.044042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.044198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.044223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.044484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.044509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 
00:26:52.600 [2024-07-25 09:41:25.044711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.044736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.044944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.044969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.045193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.045219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.045425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.045451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.045670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.045695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.045869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.045894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.046066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.046091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.046313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.046337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.046462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.046487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.046703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.046728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 
00:26:52.600 [2024-07-25 09:41:25.046900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.046940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.047164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.047190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.047349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.047395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.047624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.047650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.047865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.047889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.048079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.048104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.048269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.048294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.048451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.048477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.048690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.048715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 00:26:52.600 [2024-07-25 09:41:25.048923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.600 [2024-07-25 09:41:25.048948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.600 qpair failed and we were unable to recover it. 
00:26:52.601 [2024-07-25 09:41:25.049094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.049119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.049300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.049325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.049555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.049585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.049743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.049768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.049901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.049926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.050059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.050084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.050294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.050318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.050551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.050577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.050722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.050746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.050936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.050976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 
00:26:52.601 [2024-07-25 09:41:25.051134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.051159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.051383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.051409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.051571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.051596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.051772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.051797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.051928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.051953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.052175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.052200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.052379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.052405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.052588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.052614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.052830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.052855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 00:26:52.601 [2024-07-25 09:41:25.053075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.601 [2024-07-25 09:41:25.053114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.601 qpair failed and we were unable to recover it. 
00:26:52.606 [2024-07-25 09:41:25.097384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.097409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.097587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.097632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.097827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.097874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.098131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.098172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.098310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.098348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.098562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.098586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.098826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.098876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.606 qpair failed and we were unable to recover it. 00:26:52.606 [2024-07-25 09:41:25.099069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.606 [2024-07-25 09:41:25.099110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.099322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.099368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.099554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.099578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 
00:26:52.607 [2024-07-25 09:41:25.099755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.099803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.100042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.100082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.100289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.100312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.100554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.100580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.100754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.100805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.101029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.101069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.101284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.101307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.101467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.101491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.101719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.101767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.102006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.102047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 
00:26:52.607 [2024-07-25 09:41:25.102227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.102250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.102444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.102486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.102733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.102781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.103020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.103061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.103246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.103278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.103505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.103546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.103766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.103816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.104037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.104077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.104288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.104311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.104554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.104579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 
00:26:52.607 [2024-07-25 09:41:25.104820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.104865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.105109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.105150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.105290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.105317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.105522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.105564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.105710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.105733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.105924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.105965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.106175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.106215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.106373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.106412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.106561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.106606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.106846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.106875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 
00:26:52.607 [2024-07-25 09:41:25.107034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.107063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.107259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.107287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.107481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.107506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.107717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.107745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.107964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.107992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.108210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.108237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.607 qpair failed and we were unable to recover it. 00:26:52.607 [2024-07-25 09:41:25.108467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.607 [2024-07-25 09:41:25.108490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.108746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.108795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.108937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.108964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.109123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.109150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 
00:26:52.608 [2024-07-25 09:41:25.109389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.109413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.109577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.109600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.109834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.109861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.110047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.110075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.110266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.110293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.110517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.110541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.110765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.110792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.111004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.111032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.111172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.111215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.111444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.111468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 
00:26:52.608 [2024-07-25 09:41:25.111612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.111651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.111862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.111889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.112093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.112141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.112352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.112385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.112647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.112675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.112900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.112927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.113053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.113091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.113314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.113341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.113566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.113590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.113826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.113854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 
00:26:52.608 [2024-07-25 09:41:25.114081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.114128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.114314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.114341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.114541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.114569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.114752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.114779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.114991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.115013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.115157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.115196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.115415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.115439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.115670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.115697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.115893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.115915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.116114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.116160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 
00:26:52.608 [2024-07-25 09:41:25.116377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.116417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.116566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.116590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.116797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.116819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.117013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.117063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.117288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-07-25 09:41:25.117314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.608 qpair failed and we were unable to recover it. 00:26:52.608 [2024-07-25 09:41:25.117530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.117554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.117791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.117814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.117996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.118052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.118210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.118238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.118422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.118447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 
00:26:52.609 [2024-07-25 09:41:25.118663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.118685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.118891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.118940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.119154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.119181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.119415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.119443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.119672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.119694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.119927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.119976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.120196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.120223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.120432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.120461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.120686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.120708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.120944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.120992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 
00:26:52.609 [2024-07-25 09:41:25.121168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.121195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.121428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.121456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.121692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.121714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.121866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.121915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.122076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.122103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.122290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.122317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.122548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.122571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.122810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.122858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.123107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.123134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.123311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.123339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 
00:26:52.609 [2024-07-25 09:41:25.123490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.123514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.123763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.123811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.124024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.124056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.124271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.124298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.124488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.124512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.124746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.124794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.124988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.125015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.125196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.125223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.125390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.125413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.125651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.125678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 
00:26:52.609 [2024-07-25 09:41:25.125902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.125929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.126143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.126170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.609 qpair failed and we were unable to recover it. 00:26:52.609 [2024-07-25 09:41:25.126385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.609 [2024-07-25 09:41:25.126423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.126589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.126612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.126842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.126869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.127089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.127116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.127297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.127324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.127556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.127580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.127747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.127775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.127963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.127990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 
00:26:52.610 [2024-07-25 09:41:25.128181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.128204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.128385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.128425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.128618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.128655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.128841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.128868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.129087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.129109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.129331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.129364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.129581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.129608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.129831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.129858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.130019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.130041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.130218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.130253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 
00:26:52.610 [2024-07-25 09:41:25.130430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.130458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.130678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.130705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.130910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.130932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.131160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.131207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.131402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.131430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.131650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.131677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.131875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.131897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.132133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.132180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.132408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.132436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.132651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.132679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 
00:26:52.610 [2024-07-25 09:41:25.132853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.132875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.133119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.133170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.133393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.133421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.133645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.133673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.133905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.133928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.134136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.134186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.134371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.134399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.134512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.134540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.134679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.134716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.134855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.134895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 
00:26:52.610 [2024-07-25 09:41:25.135108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.135135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.135365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.135393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.135579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.135602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.135830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.135876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.136103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.136131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.136315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.136342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.136526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.136550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.610 [2024-07-25 09:41:25.136773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.610 [2024-07-25 09:41:25.136821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.610 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.137045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.137072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.137282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.137309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 
00:26:52.611 [2024-07-25 09:41:25.137525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.137550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.137798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.137847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.138067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.138094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.138275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.138302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.138523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.138547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.138741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.138795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.138950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.138977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.139195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.139222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.139412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.139435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.139682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.139725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 
00:26:52.611 [2024-07-25 09:41:25.139928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.139955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.140151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.140178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.140349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.140382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.140608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.140633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.140863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.140891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.141118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.141146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.141338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.141371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.141599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.141623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.141869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.141897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.142006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.142033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 
00:26:52.611 [2024-07-25 09:41:25.142273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.142296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.142533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.142561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.142783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.142811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.143039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.143066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.143277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.143299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.143539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.143567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.143775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.143803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.143963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.143990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.144165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.144187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.144379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.144420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 
00:26:52.611 [2024-07-25 09:41:25.144563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.144590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.144800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.144827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.145047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.145069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.145304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.145332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.145551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.145575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.145805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.145832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.146025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.146048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.146280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.146307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.146530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.146558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.146736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.146763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 
00:26:52.611 [2024-07-25 09:41:25.146955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.146978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.147168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.147217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.147452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.147480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.147645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.147673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.611 [2024-07-25 09:41:25.147885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.611 [2024-07-25 09:41:25.147908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.611 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.148064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.148111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.148353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.148398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.148627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.148655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.148840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.148862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.149033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.149059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 
00:26:52.612 [2024-07-25 09:41:25.149238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.149265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.149491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.149519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.149725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.149748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.149915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.150001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.150216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.150243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.150381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.150409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.150564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.150603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.150815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.150864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.151096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.151123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.151251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.151278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 
00:26:52.612 [2024-07-25 09:41:25.151431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.151469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.151683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.151729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.151942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.151969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.152184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.152212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.152448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.152471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.152645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.152692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.152869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.152896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.153088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.153115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.153274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.153301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.153480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.153505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 
00:26:52.612 [2024-07-25 09:41:25.153697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.153724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.153948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.153975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.154195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.154217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.154416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.154440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.154665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.154692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.154869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.154896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.155078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.155101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.155324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.155351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.155590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.155617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.155831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.155858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 
00:26:52.612 [2024-07-25 09:41:25.156012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.156035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.156257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.156284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.156512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.156540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.156730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.156758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.156947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.156969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.157167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.157213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.157390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.157418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.157559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.157587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.157817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.157839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.157999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.158051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 
00:26:52.612 [2024-07-25 09:41:25.158199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.158228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.612 qpair failed and we were unable to recover it. 00:26:52.612 [2024-07-25 09:41:25.158412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.612 [2024-07-25 09:41:25.158440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.158667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.158689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.158905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.158954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.159177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.159205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.159453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.159481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.159623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.159646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.159880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.159926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.160088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.160115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.160371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.160399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 
00:26:52.613 [2024-07-25 09:41:25.160552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.160575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.160783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.160830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.161033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.161060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.161221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.161249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.161471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.161495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.161739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.161766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.161963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.161990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.162167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.162195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.162422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.162445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.162687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.162738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 
00:26:52.613 [2024-07-25 09:41:25.162916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.162943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.163131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.163158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.163362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.163390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.163611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.163651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.163836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.163863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.164024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.164052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.164285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.164308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.164554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.164582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.164794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.164821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.165038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.165065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 
00:26:52.613 [2024-07-25 09:41:25.165248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.165270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.165462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.165491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.165671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.165698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.165870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.165897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.166112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.166135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.166362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.166390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.166571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.166599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.166767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.166794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.166953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.166975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.167202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.167255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 
00:26:52.613 [2024-07-25 09:41:25.167493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.167518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.167752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.167779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.168002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.168024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.613 qpair failed and we were unable to recover it. 00:26:52.613 [2024-07-25 09:41:25.168252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.613 [2024-07-25 09:41:25.168279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 00:26:52.614 [2024-07-25 09:41:25.168487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.614 [2024-07-25 09:41:25.168515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 00:26:52.614 [2024-07-25 09:41:25.168728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.614 [2024-07-25 09:41:25.168756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 00:26:52.614 [2024-07-25 09:41:25.168945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.614 [2024-07-25 09:41:25.168968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 00:26:52.614 [2024-07-25 09:41:25.169170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.614 [2024-07-25 09:41:25.169224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 00:26:52.614 [2024-07-25 09:41:25.169495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.614 [2024-07-25 09:41:25.169523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 00:26:52.614 [2024-07-25 09:41:25.169755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.614 [2024-07-25 09:41:25.169782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.614 qpair failed and we were unable to recover it. 
00:26:52.614 [2024-07-25 09:41:25.169993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.614 [2024-07-25 09:41:25.170016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.614 qpair failed and we were unable to recover it.
00:26:52.618 [2024-07-25 09:41:25.219724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.618 [2024-07-25 09:41:25.219773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.618 qpair failed and we were unable to recover it.
00:26:52.618 [2024-07-25 09:41:25.219988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.220015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.220152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.220180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.220414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.220437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.220612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.220652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.220818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.220846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.221002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.221030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.221253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.221281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.221451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.221475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.221668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.221709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.221883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.221910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 
00:26:52.618 [2024-07-25 09:41:25.222178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.222200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.222448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.222496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.222720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.222748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.222938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.222965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.223126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.223148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.223267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.223290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.223533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.223560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.223735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.223762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.223949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.223971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.224164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.224197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 
00:26:52.618 [2024-07-25 09:41:25.224384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.224412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.224629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.224656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.224880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.224903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.225077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.225126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.225283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.225310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.225535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.225560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.225772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.225795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.226036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.226084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.226264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.226291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.226513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.226542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 
00:26:52.618 [2024-07-25 09:41:25.226669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.226706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.226868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.226909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.618 [2024-07-25 09:41:25.227133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-25 09:41:25.227160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.618 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.227352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.227387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.227611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.227648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.227825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.227873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.228088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.228115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.228298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.228325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.228501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.228525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.228722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.228781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 
00:26:52.619 [2024-07-25 09:41:25.228998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.229025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.229208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.229235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.229447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.229470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.229656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.229703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.229922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.229950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.230140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.230167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.230396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.230421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.230564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.230591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.230748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.230776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.231005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.231032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 
00:26:52.619 [2024-07-25 09:41:25.231255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.231277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.231462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.231491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.231680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.231707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.231919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.231946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.232173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.232196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.232432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.232480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.232624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.232651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.232827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.232854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.233079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.233101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.233332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.233370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 
00:26:52.619 [2024-07-25 09:41:25.233563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.233587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.233771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.233799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.233969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.233992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.234222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.234250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.234469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.234497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.234657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.234684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.234901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.234923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.235160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.235207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.235428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.235456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.235665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.235692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 
00:26:52.619 [2024-07-25 09:41:25.235873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.235895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.236096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.236144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.236371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.236399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.236560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.236588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.236808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.236830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.237020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.237068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.237257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.237285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.237475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.237503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.619 [2024-07-25 09:41:25.237721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-25 09:41:25.237743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.619 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.237984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.238033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 
00:26:52.620 [2024-07-25 09:41:25.238262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.238290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.238515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.238544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.238766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.238788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.239021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.239070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.239296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.239323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.239550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.239574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.239801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.239823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.240024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.240074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.240296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.240322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.240554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.240579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 
00:26:52.620 [2024-07-25 09:41:25.240730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.240752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.240935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.240986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.241168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.241195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.241380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.241408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.241631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.241653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.241889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.241939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.242103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.242130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.242307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.242334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.242529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.242552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.242785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.242837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 
00:26:52.620 [2024-07-25 09:41:25.243057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.243084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.243298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.243326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.243517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.243540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.243739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.243790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.243979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.244006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.244229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.244277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.244507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.244531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.244675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.244736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.244930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.244956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.245095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.245122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 
00:26:52.620 [2024-07-25 09:41:25.245341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.245385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.245603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.245630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.245846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.245873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.246048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.246075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.246285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.246306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.246506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.246534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.246754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.246781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.246963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.246990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.247210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.247232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.247469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.247518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 
00:26:52.620 [2024-07-25 09:41:25.247691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.247718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.247929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.247956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.248138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.248161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.248406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.248430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.248646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.248669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.248900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.248927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.249133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.249158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.249365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.249393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.249589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.249613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.249796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.249823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 
00:26:52.620 [2024-07-25 09:41:25.249980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-25 09:41:25.250002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.620 qpair failed and we were unable to recover it. 00:26:52.620 [2024-07-25 09:41:25.250213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.250261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.250447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.250475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.250641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.250667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.250851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.250873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.251071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.251118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.251277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.251304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.251496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.251519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.251740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.251762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.252003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.252051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 
00:26:52.621 [2024-07-25 09:41:25.252284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.252312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.252507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.252531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.252722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.252744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.252944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.252991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.253209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.253237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.253416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.253444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.253674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.253697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.253932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.253979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.254190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.254217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.254344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.254378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 
00:26:52.621 [2024-07-25 09:41:25.254560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.254584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.254820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.254870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.255096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.255132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.255303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.255330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 629523 Killed "${NVMF_APP[@]}" "$@"
00:26:52.621 [2024-07-25 09:41:25.255509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.255538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.255734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.255790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:52.621 [2024-07-25 09:41:25.255977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.256005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:52.621 [2024-07-25 09:41:25.256220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.256248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:52.621 [2024-07-25 09:41:25.256458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.256482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.621 [2024-07-25 09:41:25.256637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.256665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.256824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.256851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.256991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.257018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.257158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.257197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.257324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.257348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.257511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.257538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.257677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.257705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.257831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.257870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 
00:26:52.621 [2024-07-25 09:41:25.258021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.258055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.258281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.258308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.258465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.258490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.258622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.258662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.258809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.258836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.259008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.259036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.259200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.259228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.259378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.259403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.259576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.259604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 00:26:52.621 [2024-07-25 09:41:25.259777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.621 [2024-07-25 09:41:25.259804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.621 qpair failed and we were unable to recover it. 
00:26:52.621 [2024-07-25 09:41:25.259949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.259977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.260118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.260156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.260293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.260334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.260510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.260538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=630055
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 630055
00:26:52.621 [2024-07-25 09:41:25.260717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.621 [2024-07-25 09:41:25.260744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.621 qpair failed and we were unable to recover it.
00:26:52.621 [2024-07-25 09:41:25.260847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.260871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 630055 ']'
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:52.622 [2024-07-25 09:41:25.261028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.261052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:52.622 [2024-07-25 09:41:25.261230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.261258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:52.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:52.622 [2024-07-25 09:41:25.261443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.261471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:52.622 [2024-07-25 09:41:25.261594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.261619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 [2024-07-25 09:41:25.261775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.261799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 [2024-07-25 09:41:25.261977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.262005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 [2024-07-25 09:41:25.262141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.262167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 [2024-07-25 09:41:25.262329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.262373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
00:26:52.622 [2024-07-25 09:41:25.262516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.622 [2024-07-25 09:41:25.262542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420
00:26:52.622 qpair failed and we were unable to recover it.
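Interleaved with the connect errors, the shell trace above shows the test's disconnect_init path bringing the target back: the previous nvmf_tgt (PID 629523) has been killed (reported by bash at target_disconnect.sh line 36), nvmfappstart relaunches the target inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0 (cores 4-7), records nvmfpid=630055, and waitforlisten polls (max_retries=100) until the new process listens on the RPC socket /var/tmp/spdk.sock. A rough sketch of what those traced steps amount to; the binary path and flags are copied from the trace, while the wait loop is only illustrative and not the real waitforlisten helper:

    # Relaunch the NVMe-oF target in the test namespace, as the trace shows.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Illustrative stand-in for waitforlisten: block until the app has created
    # its RPC UNIX domain socket, after which the test can drive the new target.
    while [ ! -S /var/tmp/spdk.sock ]; do
        sleep 0.5
    done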
00:26:52.622 [2024-07-25 09:41:25.262667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.262691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.262863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.262890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.263030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.263069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.263222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.263258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.263445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.263482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.263661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.263698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.263871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.263905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.264091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.264123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.264285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.264313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.264469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.264495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 
00:26:52.622 [2024-07-25 09:41:25.264646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.264681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.264733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf34230 (9): Bad file descriptor 00:26:52.622 [2024-07-25 09:41:25.264932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.264971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.265133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.265161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.265340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.265402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.265551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.265577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.265710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.265735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.265856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.265881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.266004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.266183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 
00:26:52.622 [2024-07-25 09:41:25.266367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.266532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.266708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.266825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.266964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.266989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.267108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.267134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.267258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.267283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.267436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.267462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.267587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.267612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.267759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.267785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 
00:26:52.622 [2024-07-25 09:41:25.267912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.267937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.268037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.268062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.268200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.268228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.268385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.268430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.268583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.268609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.268783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.268809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.268955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.268980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.269099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.269125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.269239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.269266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.269384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.269410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 
00:26:52.622 [2024-07-25 09:41:25.269564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.269590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.269747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.269771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.269936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.269961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.622 [2024-07-25 09:41:25.270111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.622 [2024-07-25 09:41:25.270136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.622 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.270250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.270275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.270442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.270468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.270615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.270641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.270754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.270799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.270953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.270978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.271076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.271103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 
00:26:52.623 [2024-07-25 09:41:25.271225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.271250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.271371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.271406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.271510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.271536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.271691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.271717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.271861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.271886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.272020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.272044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.272185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.272210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.272331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.272365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.272523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.272549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.272695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.272720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 
00:26:52.623 [2024-07-25 09:41:25.272835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.272860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.273018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.273043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.273158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.273186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.273302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.273330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.273494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.273522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.273666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.273707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.273867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.273893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.274038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.274064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.274217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.274243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.274392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.274419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 
00:26:52.623 [2024-07-25 09:41:25.274534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.274560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.274681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.274707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.274838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.274863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.274992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.275125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.275267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.275424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.275597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.275737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.275923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.275949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 
00:26:52.623 [2024-07-25 09:41:25.276072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.276205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.276315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.276525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.276651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.276795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.276937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.276962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.277086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.277117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.277241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.277266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.277410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.277437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 
00:26:52.623 [2024-07-25 09:41:25.277559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.277584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.277704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.277729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.277876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.277901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.278022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.278048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.278194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.623 [2024-07-25 09:41:25.278218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.623 qpair failed and we were unable to recover it. 00:26:52.623 [2024-07-25 09:41:25.278336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.278375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.278526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.278552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.278676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.278702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.278852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.278878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.279026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.279051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 
00:26:52.624 [2024-07-25 09:41:25.279198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.279224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece0000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.279365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.279401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.279504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.279536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.279696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.279728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.279877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.279902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.280021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.280048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.280170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.280196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.280322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.280349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.280504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.280529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.280685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.280712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 
00:26:52.624 [2024-07-25 09:41:25.280855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.280886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.280976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.281126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.281276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.281457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.281650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.281776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.281956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.281982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.282119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.282146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.282265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.282291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 
00:26:52.624 [2024-07-25 09:41:25.282417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.282444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.282531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.282557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.282684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.282711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.282836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.282862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.282996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.283117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.283243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.283395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.283559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.283682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 
00:26:52.624 [2024-07-25 09:41:25.283837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.283863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.283984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.284132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.284304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.284458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.284630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.284773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.284915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.284942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.285061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.285086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.285218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.285244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 
00:26:52.624 [2024-07-25 09:41:25.285352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.285388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.285545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.285572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.285695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.285726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.285859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.285885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.286030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.286170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.286322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.286507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.286662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.286838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 
00:26:52.624 [2024-07-25 09:41:25.286960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.286985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.287104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.287130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.287288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.287314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.624 qpair failed and we were unable to recover it. 00:26:52.624 [2024-07-25 09:41:25.287469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.624 [2024-07-25 09:41:25.287497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.287610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.287651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.287814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.287840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.288000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.288160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.288320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.288480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 
00:26:52.625 [2024-07-25 09:41:25.288650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.288787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.288970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.288995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.289114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.289139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.289261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.289286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.289401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.289426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.289550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.289575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.289696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.289720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.289869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.289895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.290045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.290070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 
00:26:52.625 [2024-07-25 09:41:25.290218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.290243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.290331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.290363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.290511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.290536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.290654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.290679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.290828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.290853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.290998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.291138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.291281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.291411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.291601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 
00:26:52.625 [2024-07-25 09:41:25.291764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.291907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.291938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.292910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.292935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.293022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 
00:26:52.625 [2024-07-25 09:41:25.293195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.293307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.293445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.293590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.293699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.293867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.293892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.294023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.294193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.294369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.294481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 
00:26:52.625 [2024-07-25 09:41:25.294621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.294758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.294898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.294923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.295049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.295074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.295192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.295217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.295368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.295393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.295516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.295541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.295676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.295701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.295818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.295847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.296004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.296030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 
00:26:52.625 [2024-07-25 09:41:25.296149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.296174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.296294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.296319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.296443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.296469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.296605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.625 [2024-07-25 09:41:25.296629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.625 qpair failed and we were unable to recover it. 00:26:52.625 [2024-07-25 09:41:25.296738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.296762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.296874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.296899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.297021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.297155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.297327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.297490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 
00:26:52.626 [2024-07-25 09:41:25.297617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.297742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.297904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.297930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.298920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.298959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 
00:26:52.626 [2024-07-25 09:41:25.299116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.299141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.299259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.299283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.299400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.299426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.299525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.299550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.299696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.299721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.299838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.299863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.300019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.300161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.300329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.300464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 
00:26:52.626 [2024-07-25 09:41:25.300621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.300778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.300922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.300958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.301087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.301124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.301296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.301335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.301480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.301508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.301614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.301641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.301761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.301786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.301925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.301952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.302075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 
00:26:52.626 [2024-07-25 09:41:25.302229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.302379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.302496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.302620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.302766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.302959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.302992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.303103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.303132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.303256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.303289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.303414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.303441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.303550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.303577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 
00:26:52.626 [2024-07-25 09:41:25.303710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.303736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.303891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.303922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.304038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.304065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.304196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.304222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.304342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.304377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.304507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.304533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.304628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.304660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.626 [2024-07-25 09:41:25.304810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.626 [2024-07-25 09:41:25.304844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.626 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 
00:26:52.892 [2024-07-25 09:41:25.305261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.305966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.305991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.306116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.306244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.306414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.306531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 
00:26:52.892 [2024-07-25 09:41:25.306646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.306786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.306929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.306954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.307104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.307274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.307425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.307550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.307695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.307865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.307984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 
00:26:52.892 [2024-07-25 09:41:25.308133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.308275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.308433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.308583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.308727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.308835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.308861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.309011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.309035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.309168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.892 [2024-07-25 09:41:25.309193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.892 qpair failed and we were unable to recover it. 00:26:52.892 [2024-07-25 09:41:25.309278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.309303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.309399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.309425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 
00:26:52.893 [2024-07-25 09:41:25.309513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.309537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.309619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.309644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.309768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.309792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.309912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.309941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.310052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.310200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.310344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.310495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.310615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.310793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 
00:26:52.893 [2024-07-25 09:41:25.310934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.310959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.311947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.311973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.312073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.312219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 
00:26:52.893 [2024-07-25 09:41:25.312409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.312537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.312648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.312784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.312839] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:26:52.893 [2024-07-25 09:41:25.312931] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.893 [2024-07-25 09:41:25.312942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.312966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.313117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.313141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.313288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.313312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.313412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.313437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.313541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.313566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 
00:26:52.893 [2024-07-25 09:41:25.313690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.313719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.313866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.313891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.893 [2024-07-25 09:41:25.314025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.893 [2024-07-25 09:41:25.314049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.893 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.314212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.314237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.314361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.314386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.314468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.314492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.314596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.314621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.314738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.314763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.314889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.314914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.315058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.315097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 
00:26:52.894 [2024-07-25 09:41:25.315214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.315239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.315372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.315398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.315521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.315546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.315691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.315716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.315845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.315879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.316008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.316186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.316317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.316465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.316593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 
00:26:52.894 [2024-07-25 09:41:25.316718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.316865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.316890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.317883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.317993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 
00:26:52.894 [2024-07-25 09:41:25.318113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.318262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.318375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.318497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.318654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.318778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.318963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.318988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.319131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.319156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.894 [2024-07-25 09:41:25.319270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.894 [2024-07-25 09:41:25.319309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.894 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.319423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.319448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 
00:26:52.895 [2024-07-25 09:41:25.319544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.319570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.319677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.319705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.319831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.319857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.320830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.320855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 
00:26:52.895 [2024-07-25 09:41:25.321003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.321111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.321291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.321441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.321563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.321715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.321858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.321883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 
00:26:52.895 [2024-07-25 09:41:25.322419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.322850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.322976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.323091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.323257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.323387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.323513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.323626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 
00:26:52.895 [2024-07-25 09:41:25.323778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.323946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.323971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.324118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.324142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.324266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.324291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.324383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.324408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.895 [2024-07-25 09:41:25.324512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.895 [2024-07-25 09:41:25.324539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.895 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.324661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.324687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.324825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.324864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.325084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.325248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 
00:26:52.896 [2024-07-25 09:41:25.325377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.325532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.325676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.325825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.325972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.325998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.326099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.326246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.326399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.326526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.326646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 
00:26:52.896 [2024-07-25 09:41:25.326789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.326909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.326934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.327918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.327943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.328065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.328090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 
00:26:52.896 [2024-07-25 09:41:25.328211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.328249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.328373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.328402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.328501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.328527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.328684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.328710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.328832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.328862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.328975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.329001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.329149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.329175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.329360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.329386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.896 [2024-07-25 09:41:25.329504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.896 [2024-07-25 09:41:25.329529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.896 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.329686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.329711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 
00:26:52.897 [2024-07-25 09:41:25.329807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.329832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.329990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.330882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.330998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.331139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 
00:26:52.897 [2024-07-25 09:41:25.331284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.331396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.331508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.331679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.331850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.331974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.331999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.332140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.332165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.332309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.332334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.332432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.332466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.332568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.332595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 
00:26:52.897 [2024-07-25 09:41:25.332741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.332768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.332890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.332917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.333935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.333959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.334121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.334146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 
00:26:52.897 [2024-07-25 09:41:25.334249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.897 [2024-07-25 09:41:25.334276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.897 qpair failed and we were unable to recover it. 00:26:52.897 [2024-07-25 09:41:25.334406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.334432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.334529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.334554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.334653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.334677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.334808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.334832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.334984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.335136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.335299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.335438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.335564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 
00:26:52.898 [2024-07-25 09:41:25.335706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.335863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.335887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.336828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.336866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.337007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 
00:26:52.898 [2024-07-25 09:41:25.337165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.337308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.337456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.337579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.337742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.337915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.337938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.338099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.338122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.338254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.338291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.338390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.338415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.338510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.338534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 
00:26:52.898 [2024-07-25 09:41:25.338647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.338672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.338795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.338818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.338977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.339001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.339089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.339112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.339224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.339246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.339367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.339391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.339495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.898 [2024-07-25 09:41:25.339523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.898 qpair failed and we were unable to recover it. 00:26:52.898 [2024-07-25 09:41:25.339615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.339653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.339820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.339842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.339937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.339959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 
00:26:52.899 [2024-07-25 09:41:25.340093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.340115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.340220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.340244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.340378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.340403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.340528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.340551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.340705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.340727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.340869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.340892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.341027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.341050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.341181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.341205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.341366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.341389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.341515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.341538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 
00:26:52.899 [2024-07-25 09:41:25.341671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.341694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.341803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.341826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.341985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.342970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.342993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 
00:26:52.899 [2024-07-25 09:41:25.343116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.343139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.343254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.343276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.343402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.343426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.343508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.343531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.343678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.343701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.343810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.343832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.343981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.344154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.344305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.344419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 
00:26:52.899 [2024-07-25 09:41:25.344570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.344750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.344931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.344953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.345035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.899 [2024-07-25 09:41:25.345058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.899 qpair failed and we were unable to recover it. 00:26:52.899 [2024-07-25 09:41:25.345216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.345238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.345343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.345391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.345500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.345525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fece8000b90 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.345670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.345707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.345867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.345891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.346032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 
00:26:52.900 [2024-07-25 09:41:25.346191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fecf0000b90 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.346297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.346422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.346569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.346712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.346878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.346915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.347026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.347159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.347273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.347419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 
00:26:52.900 [2024-07-25 09:41:25.347566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.347737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.347905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.347927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.348876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.348899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 
00:26:52.900 [2024-07-25 09:41:25.349026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.349185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.349326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.349459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.349604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.349745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.349892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.349914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.900 qpair failed and we were unable to recover it. 00:26:52.900 [2024-07-25 09:41:25.350053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.900 [2024-07-25 09:41:25.350076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.350185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.350208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.350323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.350345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 
00:26:52.901 [2024-07-25 09:41:25.350440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.350463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.350554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.350576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.350727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.350750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.350888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.350910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 
00:26:52.901 [2024-07-25 09:41:25.351826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.351849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.351981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.352936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.352958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.353075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.353097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 
00:26:52.901 [2024-07-25 09:41:25.353213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.353236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.353365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.353388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.353516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.353539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.353679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.353715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.353827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.353849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.353979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.354002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.354127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.354149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.354240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.354262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.354367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.354391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 00:26:52.901 [2024-07-25 09:41:25.354487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.901 [2024-07-25 09:41:25.354510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.901 qpair failed and we were unable to recover it. 
00:26:52.902 [2024-07-25 09:41:25.354658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.354695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.354818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.354840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.354964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.354986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.355154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.355322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.902 [2024-07-25 09:41:25.355461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.355578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.355705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.355842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.355972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.355994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 
00:26:52.902 [2024-07-25 09:41:25.356127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.356279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.356394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.356515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.356663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.356822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.356976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.356999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.357094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.357208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.357332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 
00:26:52.902 [2024-07-25 09:41:25.357448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.357553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.357716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.357910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.357931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.358032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.358053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.358211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.358234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.358353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.358381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.358475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.902 [2024-07-25 09:41:25.358498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.902 qpair failed and we were unable to recover it. 00:26:52.902 [2024-07-25 09:41:25.358601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.903 [2024-07-25 09:41:25.358623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.903 qpair failed and we were unable to recover it. 00:26:52.903 [2024-07-25 09:41:25.358741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.903 [2024-07-25 09:41:25.358763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420 00:26:52.903 qpair failed and we were unable to recover it. 
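The repeated "connect() failed, errno = 111" entries above are the host-side initiator polling 10.0.0.2:4420 before the target's listener is up; on Linux errno 111 is ECONNREFUSED. A minimal bash sketch of the same wait-for-listener pattern (illustrative only, not the autotest helper itself; the address and port are taken from the log):

    # Retry until something is listening on the target address/port.
    # A refused connect() (errno 111) shows up here as a non-zero exit status.
    until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo "connect() refused (errno 111), target not listening yet"
        sleep 0.1
    done
    echo "port 4420 is accepting connections"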
00:26:52.903 [2024-07-25 09:41:25.358868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.903 [2024-07-25 09:41:25.358890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:52.903 qpair failed and we were unable to recover it.
00:26:52.903 [2024-07-25 09:41:25.359030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.903 [2024-07-25 09:41:25.359053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26250 with addr=10.0.0.2, port=4420
00:26:52.903 [2024-07-25 09:41:25.390087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:52.903 [2024-07-25 09:41:25.506014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:52.903 [2024-07-25 09:41:25.506069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:52.903 [2024-07-25 09:41:25.506083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:52.903 [2024-07-25 09:41:25.506094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:52.903 [2024-07-25 09:41:25.506104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:52.903 [2024-07-25 09:41:25.506189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:52.903 [2024-07-25 09:41:25.506253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:52.903 [2024-07-25 09:41:25.506318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:52.903 [2024-07-25 09:41:25.506320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.161 Malloc0
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:53.161 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.162 [2024-07-25 09:41:25.692585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.162 [2024-07-25 09:41:25.720853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:53.162 09:41:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 629645
00:26:53.162 qpair failed and we were unable to recover it.
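The xtrace above (rpc_cmd bdev_malloc_create through nvmf_subsystem_add_listener) is the target-side bring-up for this test case: a malloc bdev is exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 over NVMe/TCP on 10.0.0.2:4420. rpc_cmd is the autotest wrapper; outside the harness the same configuration can be applied to a running nvmf_tgt with scripts/rpc.py over its default RPC socket. A sketch using the values seen in the trace (the transport options flag from the trace is omitted; the "discovery" listener alias mirrors the trace and depends on the SPDK version):

    # Stand up an NVMe-oF/TCP target equivalent to the rpc_cmd sequence above.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp                  # initialize the TCP transport
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Per the NOTICE lines above, /dev/shm/nvmf_trace.0 can be copied for offline 'spdk_trace' analysis.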
00:26:53.162 [2024-07-25 09:41:25.833618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.833766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.833808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.833825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.833839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.833876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 00:26:53.162 [2024-07-25 09:41:25.843490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.843582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.843609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.843624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.843637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.843668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 00:26:53.162 [2024-07-25 09:41:25.853525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.853623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.853651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.853666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.853679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.853715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 
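From here on, the tc2 iterations all fail the same way: the target rejects the I/O queue CONNECT with "Unknown controller ID 0x1", and the host sees the completion come back as sct 1, sc 130. Status code type 1 is command-specific status, and 130 is 0x82, which the NVMe-oF Fabrics spec defines as Connect Invalid Parameters; a CONNECT that carries a controller ID the target no longer recognizes is reported this way. A small bash helper to decode the pair, as a sketch (the function name is just for this illustration, and the table covers only the Fabrics CONNECT command-specific codes):

    # decode_connect_status SCT SC   e.g. decode_connect_status 1 130
    decode_connect_status() {
        local sct=$1 sc=$2
        if [ "$sct" -ne 1 ]; then
            echo "sct=$sct: not a command-specific status"; return
        fi
        case $(printf '0x%02x' "$sc") in
            0x80) echo "Connect Incompatible Format" ;;
            0x81) echo "Connect Controller Busy" ;;
            0x82) echo "Connect Invalid Parameters (e.g. unknown controller ID)" ;;
            0x83) echo "Connect Restart Discovery" ;;
            0x84) echo "Connect Invalid Host" ;;
            *)    echo "command-specific status $(printf '0x%02x' "$sc")" ;;
        esac
    }

    decode_connect_status 1 130   # -> Connect Invalid Parameters (e.g. unknown controller ID)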
00:26:53.162 [2024-07-25 09:41:25.863552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.863651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.863677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.863692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.863704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.863734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 00:26:53.162 [2024-07-25 09:41:25.873513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.873604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.873631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.873646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.873659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.873689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 00:26:53.162 [2024-07-25 09:41:25.883519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.883611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.883638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.883653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.883665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.883696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 
00:26:53.162 [2024-07-25 09:41:25.893549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.162 [2024-07-25 09:41:25.893640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.162 [2024-07-25 09:41:25.893667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.162 [2024-07-25 09:41:25.893682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.162 [2024-07-25 09:41:25.893695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.162 [2024-07-25 09:41:25.893725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.162 qpair failed and we were unable to recover it. 00:26:53.421 [2024-07-25 09:41:25.903628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.421 [2024-07-25 09:41:25.903776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.421 [2024-07-25 09:41:25.903807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.421 [2024-07-25 09:41:25.903822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.421 [2024-07-25 09:41:25.903834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.421 [2024-07-25 09:41:25.903865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.421 qpair failed and we were unable to recover it. 00:26:53.421 [2024-07-25 09:41:25.913601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.421 [2024-07-25 09:41:25.913735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.421 [2024-07-25 09:41:25.913761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.421 [2024-07-25 09:41:25.913776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.421 [2024-07-25 09:41:25.913789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.421 [2024-07-25 09:41:25.913819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.421 qpair failed and we were unable to recover it. 
00:26:53.421 [2024-07-25 09:41:25.923694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.421 [2024-07-25 09:41:25.923800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.421 [2024-07-25 09:41:25.923826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.421 [2024-07-25 09:41:25.923840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.421 [2024-07-25 09:41:25.923853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.421 [2024-07-25 09:41:25.923883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.421 qpair failed and we were unable to recover it. 00:26:53.421 [2024-07-25 09:41:25.933635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.421 [2024-07-25 09:41:25.933720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.421 [2024-07-25 09:41:25.933745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.421 [2024-07-25 09:41:25.933760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.421 [2024-07-25 09:41:25.933772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.421 [2024-07-25 09:41:25.933803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.421 qpair failed and we were unable to recover it. 00:26:53.421 [2024-07-25 09:41:25.943708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.421 [2024-07-25 09:41:25.943817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.421 [2024-07-25 09:41:25.943844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.421 [2024-07-25 09:41:25.943858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.421 [2024-07-25 09:41:25.943870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:25.943907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 
00:26:53.422 [2024-07-25 09:41:25.953770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:25.953877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:25.953904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:25.953919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:25.953931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:25.953961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:25.963757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:25.963858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:25.963883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:25.963898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:25.963910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:25.963940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:25.973762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:25.973862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:25.973888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:25.973903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:25.973915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:25.973946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 
00:26:53.422 [2024-07-25 09:41:25.983762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:25.983867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:25.983892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:25.983907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:25.983919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:25.983949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:25.993809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:25.993922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:25.993948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:25.993963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:25.993975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:25.994004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:26.003929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.004072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.004098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.004113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.004125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.004154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 
00:26:53.422 [2024-07-25 09:41:26.013868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.013968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.013995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.014009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.014022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.014052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:26.023886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.023991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.024016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.024031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.024043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.024073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:26.033929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.034030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.034056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.034071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.034089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.034120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 
00:26:53.422 [2024-07-25 09:41:26.043984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.044084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.044110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.044125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.044137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.044166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:26.053996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.054116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.054142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.054157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.054169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.054199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.422 [2024-07-25 09:41:26.064020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.064137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.064162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.064177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.064189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.064220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 
00:26:53.422 [2024-07-25 09:41:26.074039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.422 [2024-07-25 09:41:26.074145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.422 [2024-07-25 09:41:26.074170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.422 [2024-07-25 09:41:26.074184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.422 [2024-07-25 09:41:26.074196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.422 [2024-07-25 09:41:26.074226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.422 qpair failed and we were unable to recover it. 00:26:53.423 [2024-07-25 09:41:26.084054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.084143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.084167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.084181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.084193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.084223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 00:26:53.423 [2024-07-25 09:41:26.094076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.094180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.094206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.094221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.094233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.094263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 
00:26:53.423 [2024-07-25 09:41:26.104113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.104219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.104245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.104259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.104272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.104301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 00:26:53.423 [2024-07-25 09:41:26.114151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.114259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.114285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.114300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.114312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.114341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 00:26:53.423 [2024-07-25 09:41:26.124203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.124299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.124326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.124345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.124365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.124397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 
00:26:53.423 [2024-07-25 09:41:26.134175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.134276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.134302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.134316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.134329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.134367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 00:26:53.423 [2024-07-25 09:41:26.144197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.423 [2024-07-25 09:41:26.144307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.423 [2024-07-25 09:41:26.144333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.423 [2024-07-25 09:41:26.144347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.423 [2024-07-25 09:41:26.144368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.423 [2024-07-25 09:41:26.144400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.423 qpair failed and we were unable to recover it. 00:26:53.682 [2024-07-25 09:41:26.154252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.682 [2024-07-25 09:41:26.154339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.682 [2024-07-25 09:41:26.154373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.682 [2024-07-25 09:41:26.154388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.682 [2024-07-25 09:41:26.154400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.682 [2024-07-25 09:41:26.154430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.682 qpair failed and we were unable to recover it. 
00:26:53.682 [2024-07-25 09:41:26.164263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.682 [2024-07-25 09:41:26.164402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.682 [2024-07-25 09:41:26.164428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.682 [2024-07-25 09:41:26.164443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.682 [2024-07-25 09:41:26.164455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.682 [2024-07-25 09:41:26.164485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.682 qpair failed and we were unable to recover it. 00:26:53.682 [2024-07-25 09:41:26.174291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.682 [2024-07-25 09:41:26.174430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.682 [2024-07-25 09:41:26.174456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.682 [2024-07-25 09:41:26.174471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.682 [2024-07-25 09:41:26.174483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.682 [2024-07-25 09:41:26.174513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.682 qpair failed and we were unable to recover it. 00:26:53.682 [2024-07-25 09:41:26.184384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.682 [2024-07-25 09:41:26.184474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.682 [2024-07-25 09:41:26.184499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.682 [2024-07-25 09:41:26.184514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.682 [2024-07-25 09:41:26.184526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.682 [2024-07-25 09:41:26.184555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.682 qpair failed and we were unable to recover it. 
00:26:53.682 [2024-07-25 09:41:26.194433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.682 [2024-07-25 09:41:26.194531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.682 [2024-07-25 09:41:26.194557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.682 [2024-07-25 09:41:26.194572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.682 [2024-07-25 09:41:26.194584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.682 [2024-07-25 09:41:26.194613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.682 qpair failed and we were unable to recover it. 00:26:53.682 [2024-07-25 09:41:26.204396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.682 [2024-07-25 09:41:26.204530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.682 [2024-07-25 09:41:26.204555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.682 [2024-07-25 09:41:26.204570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.682 [2024-07-25 09:41:26.204582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.682 [2024-07-25 09:41:26.204612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.682 qpair failed and we were unable to recover it. 00:26:53.682 [2024-07-25 09:41:26.214427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.214518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.214548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.214564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.214576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.214606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 
00:26:53.683 [2024-07-25 09:41:26.224464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.224559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.224584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.224599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.224611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.224640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.234485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.234574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.234599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.234614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.234626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.234656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.244515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.244632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.244657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.244672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.244684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.244714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 
00:26:53.683 [2024-07-25 09:41:26.254557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.254645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.254671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.254686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.254698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.254727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.264592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.264681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.264706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.264721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.264733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.264763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.274601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.274690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.274715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.274730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.274742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.274772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 
00:26:53.683 [2024-07-25 09:41:26.284619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.284731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.284756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.284769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.284781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.284811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.294658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.294760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.294784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.294798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.294811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.294840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.304697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.304811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.304842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.304858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.304870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.304900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 
00:26:53.683 [2024-07-25 09:41:26.314764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.314915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.314939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.314953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.314965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.314995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.324843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.324970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.324996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.325010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.325022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.325051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 00:26:53.683 [2024-07-25 09:41:26.334772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.683 [2024-07-25 09:41:26.334869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.683 [2024-07-25 09:41:26.334895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.683 [2024-07-25 09:41:26.334910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.683 [2024-07-25 09:41:26.334923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.683 [2024-07-25 09:41:26.334952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.683 qpair failed and we were unable to recover it. 
00:26:53.683 [2024-07-25 09:41:26.344809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.344917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.344943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.344957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.344969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.345005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 00:26:53.684 [2024-07-25 09:41:26.354834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.354935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.354961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.354976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.354987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.355017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 00:26:53.684 [2024-07-25 09:41:26.364845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.364946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.364971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.364986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.364998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.365028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 
00:26:53.684 [2024-07-25 09:41:26.374871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.374968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.374994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.375008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.375020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.375050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 00:26:53.684 [2024-07-25 09:41:26.384924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.385053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.385079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.385093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.385105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.385134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 00:26:53.684 [2024-07-25 09:41:26.395018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.395123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.395154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.395169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.395181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.395211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 
00:26:53.684 [2024-07-25 09:41:26.405005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.684 [2024-07-25 09:41:26.405120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.684 [2024-07-25 09:41:26.405145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.684 [2024-07-25 09:41:26.405159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.684 [2024-07-25 09:41:26.405172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.684 [2024-07-25 09:41:26.405201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.684 qpair failed and we were unable to recover it. 00:26:53.943 [2024-07-25 09:41:26.415003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.415101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.415126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.415140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.415153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.415182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 00:26:53.943 [2024-07-25 09:41:26.424999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.425108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.425133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.425148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.425160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.425190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 
00:26:53.943 [2024-07-25 09:41:26.435053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.435177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.435203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.435218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.435236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.435267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 00:26:53.943 [2024-07-25 09:41:26.445064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.445164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.445190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.445204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.445216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.445246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 00:26:53.943 [2024-07-25 09:41:26.455091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.455192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.455218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.455233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.455245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.455275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 
00:26:53.943 [2024-07-25 09:41:26.465121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.465223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.465248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.465263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.465275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.465305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 00:26:53.943 [2024-07-25 09:41:26.475165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.475267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.943 [2024-07-25 09:41:26.475292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.943 [2024-07-25 09:41:26.475306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.943 [2024-07-25 09:41:26.475319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.943 [2024-07-25 09:41:26.475348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.943 qpair failed and we were unable to recover it. 00:26:53.943 [2024-07-25 09:41:26.485165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.943 [2024-07-25 09:41:26.485271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.485300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.485315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.485327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.485366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 
00:26:53.944 [2024-07-25 09:41:26.495226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.495331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.495365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.495382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.495395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.495425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.505218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.505343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.505381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.505397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.505410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.505440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.515475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.515626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.515652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.515667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.515679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.515710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 
00:26:53.944 [2024-07-25 09:41:26.525372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.525462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.525487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.525507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.525520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.525550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.535361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.535447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.535472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.535487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.535499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.535529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.545410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.545516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.545541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.545556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.545567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.545597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 
00:26:53.944 [2024-07-25 09:41:26.555390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.555481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.555507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.555522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.555535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.555565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.565404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.565496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.565523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.565537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.565549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.565579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.575450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.575538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.575563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.575577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.575589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.575620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 
00:26:53.944 [2024-07-25 09:41:26.585483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.585576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.585602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.585617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.585629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.585658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.595485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.595572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.595596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.595610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.595622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.595652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 00:26:53.944 [2024-07-25 09:41:26.605568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.605689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.605714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.944 [2024-07-25 09:41:26.605729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.944 [2024-07-25 09:41:26.605741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.944 [2024-07-25 09:41:26.605771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.944 qpair failed and we were unable to recover it. 
00:26:53.944 [2024-07-25 09:41:26.615668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.944 [2024-07-25 09:41:26.615791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.944 [2024-07-25 09:41:26.615816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.945 [2024-07-25 09:41:26.615836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.945 [2024-07-25 09:41:26.615849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.945 [2024-07-25 09:41:26.615878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.945 qpair failed and we were unable to recover it. 00:26:53.945 [2024-07-25 09:41:26.625578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.945 [2024-07-25 09:41:26.625686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.945 [2024-07-25 09:41:26.625712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.945 [2024-07-25 09:41:26.625727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.945 [2024-07-25 09:41:26.625739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.945 [2024-07-25 09:41:26.625769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.945 qpair failed and we were unable to recover it. 00:26:53.945 [2024-07-25 09:41:26.635604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.945 [2024-07-25 09:41:26.635722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.945 [2024-07-25 09:41:26.635746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.945 [2024-07-25 09:41:26.635760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.945 [2024-07-25 09:41:26.635772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.945 [2024-07-25 09:41:26.635802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.945 qpair failed and we were unable to recover it. 
00:26:53.945 [2024-07-25 09:41:26.645637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.945 [2024-07-25 09:41:26.645726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.945 [2024-07-25 09:41:26.645752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.945 [2024-07-25 09:41:26.645766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.945 [2024-07-25 09:41:26.645778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.945 [2024-07-25 09:41:26.645808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.945 qpair failed and we were unable to recover it. 00:26:53.945 [2024-07-25 09:41:26.655678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.945 [2024-07-25 09:41:26.655773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.945 [2024-07-25 09:41:26.655801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.945 [2024-07-25 09:41:26.655815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.945 [2024-07-25 09:41:26.655828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.945 [2024-07-25 09:41:26.655858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.945 qpair failed and we were unable to recover it. 00:26:53.945 [2024-07-25 09:41:26.665694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.945 [2024-07-25 09:41:26.665800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.945 [2024-07-25 09:41:26.665825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.945 [2024-07-25 09:41:26.665840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.945 [2024-07-25 09:41:26.665852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:53.945 [2024-07-25 09:41:26.665881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.945 qpair failed and we were unable to recover it. 
00:26:53.945 [2024-07-25 09:41:26.675806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.945 [2024-07-25 09:41:26.675941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.675966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.675981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.675993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.676023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.685779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.685908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.685933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.685948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.685960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.685989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.695780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.695877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.695903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.695918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.695930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.695959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 09:41:26.705822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.705929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.705959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.705975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.705987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.706016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.715844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.715942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.715966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.715980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.715992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.716023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.725898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.726004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.726029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.726044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.726056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.726086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 09:41:26.735899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.735999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.736025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.736039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.736052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.736082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.746023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.746125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.746152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.746166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.746178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.746214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.755932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.756034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.756060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.756075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.756087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.756117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 09:41:26.765969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.766097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.766123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.766137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.766149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.204 [2024-07-25 09:41:26.766179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 09:41:26.776038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.204 [2024-07-25 09:41:26.776134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.204 [2024-07-25 09:41:26.776160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.204 [2024-07-25 09:41:26.776175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.204 [2024-07-25 09:41:26.776186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.776216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.786054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.786160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.786185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.786200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.786212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.786242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 09:41:26.796079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.796197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.796228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.796243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.796255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.796285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.806078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.806205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.806231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.806246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.806258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.806287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.816097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.816194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.816220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.816235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.816247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.816277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 09:41:26.826155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.826248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.826272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.826286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.826299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.826329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.836159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.836270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.836296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.836310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.836328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.836367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.846198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.846313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.846339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.846354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.846374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.846405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 09:41:26.856202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.856308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.856334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.856348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.856371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.856403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.866366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.866462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.866488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.866502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.866514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.866544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.876284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.876389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.876416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.876430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.876442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.876472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 09:41:26.886363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.886468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.886494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.886508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.886521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.886551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.896351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.896462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.896488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.896503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.896515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.896545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 09:41:26.906442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.205 [2024-07-25 09:41:26.906535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.205 [2024-07-25 09:41:26.906560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.205 [2024-07-25 09:41:26.906575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.205 [2024-07-25 09:41:26.906587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.205 [2024-07-25 09:41:26.906617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 09:41:26.916469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.206 [2024-07-25 09:41:26.916564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.206 [2024-07-25 09:41:26.916590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.206 [2024-07-25 09:41:26.916604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.206 [2024-07-25 09:41:26.916616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.206 [2024-07-25 09:41:26.916646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.206 qpair failed and we were unable to recover it. 00:26:54.206 [2024-07-25 09:41:26.926449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.206 [2024-07-25 09:41:26.926537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.206 [2024-07-25 09:41:26.926562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.206 [2024-07-25 09:41:26.926581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.206 [2024-07-25 09:41:26.926594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.206 [2024-07-25 09:41:26.926624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.206 qpair failed and we were unable to recover it. 00:26:54.206 [2024-07-25 09:41:26.936466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.206 [2024-07-25 09:41:26.936591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.206 [2024-07-25 09:41:26.936617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.206 [2024-07-25 09:41:26.936632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.206 [2024-07-25 09:41:26.936644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.206 [2024-07-25 09:41:26.936675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.206 qpair failed and we were unable to recover it. 
00:26:54.464 [2024-07-25 09:41:26.946545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.464 [2024-07-25 09:41:26.946671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.464 [2024-07-25 09:41:26.946696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.464 [2024-07-25 09:41:26.946711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.464 [2024-07-25 09:41:26.946723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.464 [2024-07-25 09:41:26.946753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-07-25 09:41:26.956529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.464 [2024-07-25 09:41:26.956616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.464 [2024-07-25 09:41:26.956644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.464 [2024-07-25 09:41:26.956659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.464 [2024-07-25 09:41:26.956672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:26.956702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:26.966569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:26.966656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:26.966680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:26.966695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:26.966707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:26.966737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-07-25 09:41:26.976633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:26.976748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:26.976773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:26.976787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:26.976799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:26.976829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:26.986697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:26.986800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:26.986826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:26.986840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:26.986853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:26.986883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:26.996650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:26.996740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:26.996766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:26.996781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:26.996793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:26.996823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-07-25 09:41:27.006726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.006827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.006853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.006867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.006879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.006910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:27.016705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.016807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.016832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.016852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.016865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.016895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:27.026804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.026910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.026935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.026950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.026963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.026993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-07-25 09:41:27.036761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.036861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.036888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.036903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.036915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.036946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:27.046877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.046991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.047017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.047032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.047045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.047074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:27.056810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.056939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.056964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.056979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.056991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.057021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-07-25 09:41:27.066832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.066933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.066958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.066973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.066985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.067015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:27.076887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.465 [2024-07-25 09:41:27.077002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.465 [2024-07-25 09:41:27.077028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.465 [2024-07-25 09:41:27.077042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.465 [2024-07-25 09:41:27.077055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.465 [2024-07-25 09:41:27.077086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-07-25 09:41:27.086932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.087036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.087061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.087075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.087087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.087117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-07-25 09:41:27.097009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.097109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.097135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.097149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.097162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.097192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-07-25 09:41:27.107040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.107174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.107205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.107220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.107232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.107263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-07-25 09:41:27.117022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.117125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.117151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.117166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.117178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.117208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-07-25 09:41:27.127061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.127159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.127183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.127198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.127210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.127240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-07-25 09:41:27.137071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.137186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.137212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.137226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.137239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.137269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-07-25 09:41:27.147097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.147204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.147229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.147244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.147256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.147292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-07-25 09:41:27.157152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.157278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.157303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.157318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.157331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.157370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-07-25 09:41:27.167167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.167265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.167291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.167305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.167318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.167347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-07-25 09:41:27.177180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.177289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.177315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.177330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.177342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.177382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-07-25 09:41:27.187219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.466 [2024-07-25 09:41:27.187328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.466 [2024-07-25 09:41:27.187353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.466 [2024-07-25 09:41:27.187376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.466 [2024-07-25 09:41:27.187389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.466 [2024-07-25 09:41:27.187419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.197271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.197365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.197396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.197411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.197424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.197453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.207275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.207382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.207408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.207422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.207434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.207464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 
00:26:54.725 [2024-07-25 09:41:27.217327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.217431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.217460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.217475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.217487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.217517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.227347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.227451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.227476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.227491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.227504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.227533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.237363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.237456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.237482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.237496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.237514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.237554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 
00:26:54.725 [2024-07-25 09:41:27.247402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.247492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.247518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.247533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.247545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.247575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.257434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.257527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.257552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.257566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.257579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.257609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.267464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.267557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.267583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.267598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.267610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.267640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 
00:26:54.725 [2024-07-25 09:41:27.277474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.277561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.277587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.277601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.277614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.277643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.287512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.287602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.287636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.287650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.287662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.287693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 00:26:54.725 [2024-07-25 09:41:27.297528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.725 [2024-07-25 09:41:27.297612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.725 [2024-07-25 09:41:27.297640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.725 [2024-07-25 09:41:27.297655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.725 [2024-07-25 09:41:27.297667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.725 [2024-07-25 09:41:27.297696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.725 qpair failed and we were unable to recover it. 
00:26:54.725 [2024-07-25 09:41:27.307582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.307674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.307699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.307714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.307726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.307756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.317595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.317693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.317717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.317731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.317743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.317773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.327640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.327728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.327752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.327766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.327787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.327817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 
00:26:54.726 [2024-07-25 09:41:27.337673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.337774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.337800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.337814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.337826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.337857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.347735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.347836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.347862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.347877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.347889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.347918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.357713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.357819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.357845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.357859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.357871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.357902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 
00:26:54.726 [2024-07-25 09:41:27.367774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.367861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.367886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.367900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.367912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.367942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.377749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.377850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.377875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.377890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.377902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.377931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.387816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.387930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.387955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.387969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.387981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.388011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 
00:26:54.726 [2024-07-25 09:41:27.397868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.397973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.397999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.398013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.398025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.398055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.407822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.407958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.407982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.407996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.408009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.408038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.417874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.417977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.418002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.418022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.418035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.418065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 
00:26:54.726 [2024-07-25 09:41:27.427903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.428042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.428067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.726 [2024-07-25 09:41:27.428082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.726 [2024-07-25 09:41:27.428094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.726 [2024-07-25 09:41:27.428125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.726 qpair failed and we were unable to recover it. 00:26:54.726 [2024-07-25 09:41:27.437927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.726 [2024-07-25 09:41:27.438033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.726 [2024-07-25 09:41:27.438058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.727 [2024-07-25 09:41:27.438073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.727 [2024-07-25 09:41:27.438085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.727 [2024-07-25 09:41:27.438115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.727 qpair failed and we were unable to recover it. 00:26:54.727 [2024-07-25 09:41:27.447936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.727 [2024-07-25 09:41:27.448084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.727 [2024-07-25 09:41:27.448109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.727 [2024-07-25 09:41:27.448123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.727 [2024-07-25 09:41:27.448135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.727 [2024-07-25 09:41:27.448165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.727 qpair failed and we were unable to recover it. 
00:26:54.985 [2024-07-25 09:41:27.457983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.985 [2024-07-25 09:41:27.458080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.985 [2024-07-25 09:41:27.458105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.985 [2024-07-25 09:41:27.458120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.985 [2024-07-25 09:41:27.458132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.985 [2024-07-25 09:41:27.458162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.985 qpair failed and we were unable to recover it. 00:26:54.985 [2024-07-25 09:41:27.468059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.985 [2024-07-25 09:41:27.468184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.985 [2024-07-25 09:41:27.468210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.985 [2024-07-25 09:41:27.468224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.985 [2024-07-25 09:41:27.468236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.985 [2024-07-25 09:41:27.468266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.985 qpair failed and we were unable to recover it. 00:26:54.985 [2024-07-25 09:41:27.478039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.985 [2024-07-25 09:41:27.478173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.985 [2024-07-25 09:41:27.478199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.985 [2024-07-25 09:41:27.478213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.985 [2024-07-25 09:41:27.478225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.985 [2024-07-25 09:41:27.478256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.985 qpair failed and we were unable to recover it. 
00:26:54.985 [2024-07-25 09:41:27.488077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.985 [2024-07-25 09:41:27.488182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.985 [2024-07-25 09:41:27.488207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.985 [2024-07-25 09:41:27.488222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.985 [2024-07-25 09:41:27.488234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.985 [2024-07-25 09:41:27.488263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.985 qpair failed and we were unable to recover it. 00:26:54.985 [2024-07-25 09:41:27.498087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.985 [2024-07-25 09:41:27.498189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.985 [2024-07-25 09:41:27.498214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.985 [2024-07-25 09:41:27.498228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.985 [2024-07-25 09:41:27.498241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.985 [2024-07-25 09:41:27.498271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.985 qpair failed and we were unable to recover it. 00:26:54.985 [2024-07-25 09:41:27.508131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.985 [2024-07-25 09:41:27.508235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.508266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.508282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.508294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.508323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 
00:26:54.986 [2024-07-25 09:41:27.518160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.518302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.518327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.518342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.518354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.518395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.528207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.528334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.528374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.528391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.528404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.528434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.538213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.538341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.538376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.538391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.538403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.538433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 
00:26:54.986 [2024-07-25 09:41:27.548249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.548381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.548407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.548421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.548433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.548468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.558239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.558341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.558374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.558390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.558402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.558432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.568294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.568415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.568444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.568459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.568471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.568500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 
00:26:54.986 [2024-07-25 09:41:27.578284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.578408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.578433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.578447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.578459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.578489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.588328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.588498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.588524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.588539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.588551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.588582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.598372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.598463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.598493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.598508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.598521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.598551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 
00:26:54.986 [2024-07-25 09:41:27.608427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.608517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.608541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.608555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.608567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.608597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.618464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.618558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.618584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.618598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.618609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.618639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 00:26:54.986 [2024-07-25 09:41:27.628454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.628543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.628568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.628582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.986 [2024-07-25 09:41:27.628594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.986 [2024-07-25 09:41:27.628624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.986 qpair failed and we were unable to recover it. 
00:26:54.986 [2024-07-25 09:41:27.638469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.986 [2024-07-25 09:41:27.638556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.986 [2024-07-25 09:41:27.638582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.986 [2024-07-25 09:41:27.638597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.638609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.638646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 00:26:54.987 [2024-07-25 09:41:27.648512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.648641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.648667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.648681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.648693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.648723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 00:26:54.987 [2024-07-25 09:41:27.658565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.658652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.658680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.658695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.658707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.658737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 
00:26:54.987 [2024-07-25 09:41:27.668592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.668681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.668705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.668719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.668731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.668761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 00:26:54.987 [2024-07-25 09:41:27.678591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.678720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.678745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.678760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.678772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.678802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 00:26:54.987 [2024-07-25 09:41:27.688630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.688719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.688744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.688760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.688772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.688801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 
00:26:54.987 [2024-07-25 09:41:27.698685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.698802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.698827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.698842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.698854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.698884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 00:26:54.987 [2024-07-25 09:41:27.708680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.987 [2024-07-25 09:41:27.708806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.987 [2024-07-25 09:41:27.708831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.987 [2024-07-25 09:41:27.708845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.987 [2024-07-25 09:41:27.708857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:54.987 [2024-07-25 09:41:27.708887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:54.987 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.718697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.718787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.718813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.718828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.718840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.718869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 
00:26:55.246 [2024-07-25 09:41:27.728780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.728886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.728912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.728926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.728944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.728974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.738799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.738899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.738925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.738940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.738952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.738982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.748816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.748964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.748990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.749004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.749016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.749045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 
00:26:55.246 [2024-07-25 09:41:27.758850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.758950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.758975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.758990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.759002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.759032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.768892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.769015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.769040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.769055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.769067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.769097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.778895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.778995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.779020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.779035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.779047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.779077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 
00:26:55.246 [2024-07-25 09:41:27.788933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.789074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.789098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.789113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.789126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.789156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.798973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.799075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.799101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.799115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.799127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.799157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 00:26:55.246 [2024-07-25 09:41:27.808976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.809113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.809138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.809153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.246 [2024-07-25 09:41:27.809165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.246 [2024-07-25 09:41:27.809194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.246 qpair failed and we were unable to recover it. 
00:26:55.246 [2024-07-25 09:41:27.818984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.246 [2024-07-25 09:41:27.819087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.246 [2024-07-25 09:41:27.819113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.246 [2024-07-25 09:41:27.819133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.819146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.819176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.829070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.829191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.829215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.829230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.829242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.829272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.839084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.839191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.839216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.839230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.839243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.839272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 
00:26:55.247 [2024-07-25 09:41:27.849091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.849175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.849200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.849214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.849226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.849256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.859105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.859227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.859252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.859266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.859279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.859309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.869179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.869308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.869333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.869348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.869370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.869400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 
00:26:55.247 [2024-07-25 09:41:27.879178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.879326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.879352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.879376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.879390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.879420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.889194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.889325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.889350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.889373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.889386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.889416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.899220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.899317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.899343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.899365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.899379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.899410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 
00:26:55.247 [2024-07-25 09:41:27.909263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.909391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.909422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.909438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.909450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.909480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.919381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.919474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.919500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.919514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.919526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.919557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.929315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.929429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.929454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.929469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.929481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.929510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 
00:26:55.247 [2024-07-25 09:41:27.939389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.939495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.939520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.939534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.939546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.247 [2024-07-25 09:41:27.939576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.247 qpair failed and we were unable to recover it. 00:26:55.247 [2024-07-25 09:41:27.949400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.247 [2024-07-25 09:41:27.949491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.247 [2024-07-25 09:41:27.949516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.247 [2024-07-25 09:41:27.949531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.247 [2024-07-25 09:41:27.949542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.248 [2024-07-25 09:41:27.949577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.248 qpair failed and we were unable to recover it. 00:26:55.248 [2024-07-25 09:41:27.959435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.248 [2024-07-25 09:41:27.959534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.248 [2024-07-25 09:41:27.959559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.248 [2024-07-25 09:41:27.959574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.248 [2024-07-25 09:41:27.959586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.248 [2024-07-25 09:41:27.959616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.248 qpair failed and we were unable to recover it. 
00:26:55.248 [2024-07-25 09:41:27.969441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.248 [2024-07-25 09:41:27.969528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.248 [2024-07-25 09:41:27.969555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.248 [2024-07-25 09:41:27.969569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.248 [2024-07-25 09:41:27.969581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.248 [2024-07-25 09:41:27.969612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.248 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 09:41:27.979545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.507 [2024-07-25 09:41:27.979636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.507 [2024-07-25 09:41:27.979661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.507 [2024-07-25 09:41:27.979675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.507 [2024-07-25 09:41:27.979688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.507 [2024-07-25 09:41:27.979717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 09:41:27.989493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:27.989580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:27.989606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:27.989620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:27.989632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:27.989663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 09:41:27.999499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:27.999607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:27.999638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:27.999654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:27.999666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:27.999696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.009507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.009649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.009675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.009690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.009702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.009732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.019683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.019802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.019827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.019842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.019854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.019884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 09:41:28.029592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.029683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.029708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.029722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.029734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.029764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.039651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.039767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.039791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.039804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.039817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.039854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.049739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.049872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.049897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.049912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.049924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.049955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 09:41:28.059699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.059826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.059852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.059866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.059878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.059907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.069726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.069830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.069855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.069869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.069881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.069910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.079728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.079830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.079855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.079869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.079881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.079911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 09:41:28.089790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.089903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.089933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.089949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.089961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.089991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 09:41:28.099806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.508 [2024-07-25 09:41:28.099925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.508 [2024-07-25 09:41:28.099951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.508 [2024-07-25 09:41:28.099965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.508 [2024-07-25 09:41:28.099977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.508 [2024-07-25 09:41:28.100008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.109827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.109927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.109952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.109966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.109979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.110010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 09:41:28.119906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.120013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.120038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.120053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.120065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.120095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.129952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.130054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.130079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.130094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.130111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.130142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.139958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.140055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.140080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.140095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.140107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.140136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 09:41:28.149965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.150086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.150112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.150127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.150139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.150169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.160019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.160118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.160144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.160159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.160172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.160202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.170073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.170169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.170195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.170210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.170223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.170253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 09:41:28.180022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.180148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.180174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.180189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.180202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.180232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.190100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.190200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.190226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.190240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.190252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.190282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.200116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.200236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.200261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.200276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.200288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.200318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 09:41:28.210101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.210215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.210240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.210254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.210267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.210297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.220193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.509 [2024-07-25 09:41:28.220295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.509 [2024-07-25 09:41:28.220320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.509 [2024-07-25 09:41:28.220341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.509 [2024-07-25 09:41:28.220354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.509 [2024-07-25 09:41:28.220393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 09:41:28.230210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.510 [2024-07-25 09:41:28.230312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.510 [2024-07-25 09:41:28.230338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.510 [2024-07-25 09:41:28.230354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.510 [2024-07-25 09:41:28.230375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.510 [2024-07-25 09:41:28.230405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.510 qpair failed and we were unable to recover it. 
00:26:55.769 [2024-07-25 09:41:28.240254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.769 [2024-07-25 09:41:28.240370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.769 [2024-07-25 09:41:28.240396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.769 [2024-07-25 09:41:28.240411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.769 [2024-07-25 09:41:28.240423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.769 [2024-07-25 09:41:28.240452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.769 qpair failed and we were unable to recover it. 00:26:55.769 [2024-07-25 09:41:28.250235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.769 [2024-07-25 09:41:28.250333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.769 [2024-07-25 09:41:28.250372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.769 [2024-07-25 09:41:28.250388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.769 [2024-07-25 09:41:28.250400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.769 [2024-07-25 09:41:28.250442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.769 qpair failed and we were unable to recover it. 00:26:55.769 [2024-07-25 09:41:28.260284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.769 [2024-07-25 09:41:28.260390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.769 [2024-07-25 09:41:28.260416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.769 [2024-07-25 09:41:28.260431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.769 [2024-07-25 09:41:28.260443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.769 [2024-07-25 09:41:28.260473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.769 qpair failed and we were unable to recover it. 
00:26:55.769 [2024-07-25 09:41:28.270308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.769 [2024-07-25 09:41:28.270437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.769 [2024-07-25 09:41:28.270462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.769 [2024-07-25 09:41:28.270477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.270489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.270519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.280334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.280473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.280499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.280514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.280526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.280559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.290390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.290476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.290504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.290519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.290530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.290560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 
00:26:55.770 [2024-07-25 09:41:28.300418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.300506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.300531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.300545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.300557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.300587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.310497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.310602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.310627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.310647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.310660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.310689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.320448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.320533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.320556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.320570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.320582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.320611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 
00:26:55.770 [2024-07-25 09:41:28.330475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.330575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.330599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.330613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.330625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.330654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.340590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.340674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.340698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.340712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.340724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.340754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.350565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.350662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.350688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.350702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.350715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.350744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 
00:26:55.770 [2024-07-25 09:41:28.360595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.360683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.360708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.360723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.360735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.360765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.370629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.370748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.370772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.370786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.370798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.370828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.380677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.380780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.380806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.380821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.380832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.380862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 
00:26:55.770 [2024-07-25 09:41:28.390699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.390820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.390844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.390859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.390871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.770 [2024-07-25 09:41:28.390901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.770 qpair failed and we were unable to recover it. 00:26:55.770 [2024-07-25 09:41:28.400676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.770 [2024-07-25 09:41:28.400780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.770 [2024-07-25 09:41:28.400810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.770 [2024-07-25 09:41:28.400825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.770 [2024-07-25 09:41:28.400837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.400867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.410655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.410768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.410792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.410806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.410819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.410848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 
00:26:55.771 [2024-07-25 09:41:28.420738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.420837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.420863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.420878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.420890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.420919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.430782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.430882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.430906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.430920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.430932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.430962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.440787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.440887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.440913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.440927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.440939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.440984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 
00:26:55.771 [2024-07-25 09:41:28.450809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.450917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.450943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.450957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.450970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.450999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.460934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.461056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.461081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.461096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.461108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.461138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.470942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.471048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.471073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.471088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.471100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.471130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 
00:26:55.771 [2024-07-25 09:41:28.480989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.481102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.481128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.481142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.481154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.481184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.490889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.491005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.491035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.491050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.491063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:55.771 [2024-07-25 09:41:28.491093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:55.771 qpair failed and we were unable to recover it. 00:26:55.771 [2024-07-25 09:41:28.500969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.771 [2024-07-25 09:41:28.501094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.771 [2024-07-25 09:41:28.501119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.771 [2024-07-25 09:41:28.501133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.771 [2024-07-25 09:41:28.501145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.030 [2024-07-25 09:41:28.501175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.030 qpair failed and we were unable to recover it. 
00:26:56.030 [2024-07-25 09:41:28.510997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.030 [2024-07-25 09:41:28.511112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.030 [2024-07-25 09:41:28.511138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.030 [2024-07-25 09:41:28.511153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.030 [2024-07-25 09:41:28.511165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.030 [2024-07-25 09:41:28.511195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.030 qpair failed and we were unable to recover it. 00:26:56.030 [2024-07-25 09:41:28.521126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.030 [2024-07-25 09:41:28.521271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.030 [2024-07-25 09:41:28.521296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.030 [2024-07-25 09:41:28.521311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.030 [2024-07-25 09:41:28.521323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.030 [2024-07-25 09:41:28.521352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.030 qpair failed and we were unable to recover it. 00:26:56.030 [2024-07-25 09:41:28.531049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.030 [2024-07-25 09:41:28.531147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.030 [2024-07-25 09:41:28.531173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.030 [2024-07-25 09:41:28.531187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.030 [2024-07-25 09:41:28.531205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.030 [2024-07-25 09:41:28.531235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.030 qpair failed and we were unable to recover it. 
00:26:56.030 [2024-07-25 09:41:28.541107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:56.030 [2024-07-25 09:41:28.541240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:56.030 [2024-07-25 09:41:28.541266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:56.030 [2024-07-25 09:41:28.541280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:56.030 [2024-07-25 09:41:28.541292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90
00:26:56.030 [2024-07-25 09:41:28.541322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:56.030 qpair failed and we were unable to recover it.
[... the same seven-message CONNECT failure sequence repeats for every subsequent attempt, roughly every 10 ms, from 2024-07-25 09:41:28.551 through 09:41:29.223 (always trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, tqpair=0x7fece0000b90, qpair id 4), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:56.553 [2024-07-25 09:41:29.233095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.553 [2024-07-25 09:41:29.233200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.553 [2024-07-25 09:41:29.233224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.553 [2024-07-25 09:41:29.233238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.553 [2024-07-25 09:41:29.233251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.553 [2024-07-25 09:41:29.233281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.553 qpair failed and we were unable to recover it. 00:26:56.553 [2024-07-25 09:41:29.243056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.553 [2024-07-25 09:41:29.243182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.553 [2024-07-25 09:41:29.243208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.553 [2024-07-25 09:41:29.243223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.553 [2024-07-25 09:41:29.243235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.553 [2024-07-25 09:41:29.243271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.553 qpair failed and we were unable to recover it. 00:26:56.553 [2024-07-25 09:41:29.253076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.553 [2024-07-25 09:41:29.253191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.553 [2024-07-25 09:41:29.253216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.553 [2024-07-25 09:41:29.253231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.553 [2024-07-25 09:41:29.253243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.553 [2024-07-25 09:41:29.253272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.553 qpair failed and we were unable to recover it. 
00:26:56.553 [2024-07-25 09:41:29.263087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.553 [2024-07-25 09:41:29.263188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.553 [2024-07-25 09:41:29.263214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.553 [2024-07-25 09:41:29.263228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.553 [2024-07-25 09:41:29.263240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.553 [2024-07-25 09:41:29.263270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.553 qpair failed and we were unable to recover it. 00:26:56.553 [2024-07-25 09:41:29.273124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.554 [2024-07-25 09:41:29.273263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.554 [2024-07-25 09:41:29.273289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.554 [2024-07-25 09:41:29.273304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.554 [2024-07-25 09:41:29.273316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.554 [2024-07-25 09:41:29.273346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.554 qpair failed and we were unable to recover it. 00:26:56.554 [2024-07-25 09:41:29.283187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.554 [2024-07-25 09:41:29.283281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.554 [2024-07-25 09:41:29.283307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.554 [2024-07-25 09:41:29.283322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.554 [2024-07-25 09:41:29.283334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.554 [2024-07-25 09:41:29.283372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.554 qpair failed and we were unable to recover it. 
00:26:56.811 [2024-07-25 09:41:29.293177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.811 [2024-07-25 09:41:29.293278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.811 [2024-07-25 09:41:29.293309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.811 [2024-07-25 09:41:29.293324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.811 [2024-07-25 09:41:29.293337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.811 [2024-07-25 09:41:29.293377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.811 qpair failed and we were unable to recover it. 00:26:56.811 [2024-07-25 09:41:29.303210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.811 [2024-07-25 09:41:29.303332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.811 [2024-07-25 09:41:29.303365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.811 [2024-07-25 09:41:29.303383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.811 [2024-07-25 09:41:29.303395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.811 [2024-07-25 09:41:29.303425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.811 qpair failed and we were unable to recover it. 00:26:56.811 [2024-07-25 09:41:29.313216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.811 [2024-07-25 09:41:29.313323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.811 [2024-07-25 09:41:29.313348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.811 [2024-07-25 09:41:29.313373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.811 [2024-07-25 09:41:29.313387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.811 [2024-07-25 09:41:29.313417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.811 qpair failed and we were unable to recover it. 
00:26:56.811 [2024-07-25 09:41:29.323279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.811 [2024-07-25 09:41:29.323397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.811 [2024-07-25 09:41:29.323422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.811 [2024-07-25 09:41:29.323437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.811 [2024-07-25 09:41:29.323449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.811 [2024-07-25 09:41:29.323480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.811 qpair failed and we were unable to recover it. 00:26:56.811 [2024-07-25 09:41:29.333272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.811 [2024-07-25 09:41:29.333403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.811 [2024-07-25 09:41:29.333429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.811 [2024-07-25 09:41:29.333444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.811 [2024-07-25 09:41:29.333455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.811 [2024-07-25 09:41:29.333491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.811 qpair failed and we were unable to recover it. 00:26:56.811 [2024-07-25 09:41:29.343362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.811 [2024-07-25 09:41:29.343450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.811 [2024-07-25 09:41:29.343476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.811 [2024-07-25 09:41:29.343491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.343503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.343533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 
00:26:56.812 [2024-07-25 09:41:29.353446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.353559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.353585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.353599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.353612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.353652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.363455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.363546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.363570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.363584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.363596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.363634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.373456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.373550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.373576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.373591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.373603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.373633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 
00:26:56.812 [2024-07-25 09:41:29.383467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.383566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.383592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.383606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.383618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.383648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.393482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.393583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.393608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.393623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.393635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.393665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.403572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.403690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.403715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.403730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.403742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.403772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 
00:26:56.812 [2024-07-25 09:41:29.413562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.413652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.413678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.413692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.413704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.413734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.423567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.423651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.423675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.423689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.423707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.423738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.433658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.433767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.433793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.433808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.433820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.433850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 
00:26:56.812 [2024-07-25 09:41:29.443672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.443814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.443840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.443854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.443866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.443896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.453678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.812 [2024-07-25 09:41:29.453793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.812 [2024-07-25 09:41:29.453817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.812 [2024-07-25 09:41:29.453831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.812 [2024-07-25 09:41:29.453843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.812 [2024-07-25 09:41:29.453873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.812 qpair failed and we were unable to recover it. 00:26:56.812 [2024-07-25 09:41:29.463716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.463815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.463840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.463854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.463866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.463896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 
00:26:56.813 [2024-07-25 09:41:29.473783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.473891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.473917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.473932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.473944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.473974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 00:26:56.813 [2024-07-25 09:41:29.483828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.483965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.483991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.484005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.484023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.484053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 00:26:56.813 [2024-07-25 09:41:29.493837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.493981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.494006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.494021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.494033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.494064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 
00:26:56.813 [2024-07-25 09:41:29.503796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.503914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.503939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.503954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.503966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.503996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 00:26:56.813 [2024-07-25 09:41:29.513869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.513992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.514017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.514038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.514051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.514081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 00:26:56.813 [2024-07-25 09:41:29.523889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.523997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.524023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.524037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.524049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.524079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 
00:26:56.813 [2024-07-25 09:41:29.533908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.534011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.534037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.534052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.534064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.534093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 00:26:56.813 [2024-07-25 09:41:29.543954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.813 [2024-07-25 09:41:29.544056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.813 [2024-07-25 09:41:29.544082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.813 [2024-07-25 09:41:29.544096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.813 [2024-07-25 09:41:29.544109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:56.813 [2024-07-25 09:41:29.544138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.813 qpair failed and we were unable to recover it. 00:26:57.070 [2024-07-25 09:41:29.553987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.070 [2024-07-25 09:41:29.554096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.070 [2024-07-25 09:41:29.554122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.070 [2024-07-25 09:41:29.554136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.070 [2024-07-25 09:41:29.554148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.070 [2024-07-25 09:41:29.554178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.070 qpair failed and we were unable to recover it. 
00:26:57.070 [2024-07-25 09:41:29.563994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.070 [2024-07-25 09:41:29.564118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.070 [2024-07-25 09:41:29.564144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.564158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.564170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.564202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.574026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.574168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.574194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.574208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.574221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.574250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.584033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.584133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.584158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.584172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.584184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.584215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 
00:26:57.071 [2024-07-25 09:41:29.594057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.594161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.594187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.594203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.594215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.594246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.604080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.604195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.604227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.604243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.604255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.604292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.614231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.614334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.614370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.614387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.614400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.614430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 
00:26:57.071 [2024-07-25 09:41:29.624162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.624270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.624296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.624310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.624323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.624353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.634245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.634353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.634388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.634402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.634415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.634456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.644231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.644345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.644379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.644395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.644407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.644442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 
00:26:57.071 [2024-07-25 09:41:29.654290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.654408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.654433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.654447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.654460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.654489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.664306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.664417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.664442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.664457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.664469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.664499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.674367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.674482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.674508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.674522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.674534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.674564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 
00:26:57.071 [2024-07-25 09:41:29.684406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.684502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.684528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.071 [2024-07-25 09:41:29.684543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.071 [2024-07-25 09:41:29.684555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.071 [2024-07-25 09:41:29.684585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.071 qpair failed and we were unable to recover it. 00:26:57.071 [2024-07-25 09:41:29.694379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.071 [2024-07-25 09:41:29.694473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.071 [2024-07-25 09:41:29.694504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.694520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.694532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.694563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.704364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.704470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.704496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.704510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.704522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.704552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 
00:26:57.072 [2024-07-25 09:41:29.714435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.714528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.714553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.714568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.714580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.714610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.724467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.724568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.724594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.724608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.724620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.724650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.734523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.734612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.734637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.734652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.734663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.734698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 
00:26:57.072 [2024-07-25 09:41:29.744555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.744667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.744693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.744707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.744719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.744749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.754606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.754744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.754768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.754782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.754795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.754825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.764583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.764682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.764707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.764722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.764734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.764763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 
00:26:57.072 [2024-07-25 09:41:29.774603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.774691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.774717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.774731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.774743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.774774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.784628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.784759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.784788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.784803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.784815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.784845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 00:26:57.072 [2024-07-25 09:41:29.794680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.072 [2024-07-25 09:41:29.794829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.072 [2024-07-25 09:41:29.794854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.072 [2024-07-25 09:41:29.794868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.072 [2024-07-25 09:41:29.794880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.072 [2024-07-25 09:41:29.794911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.072 qpair failed and we were unable to recover it. 
00:26:57.329 [2024-07-25 09:41:29.804685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.329 [2024-07-25 09:41:29.804773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.329 [2024-07-25 09:41:29.804799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.329 [2024-07-25 09:41:29.804814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.329 [2024-07-25 09:41:29.804826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.329 [2024-07-25 09:41:29.804855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.329 qpair failed and we were unable to recover it. 00:26:57.329 [2024-07-25 09:41:29.814733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.329 [2024-07-25 09:41:29.814860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.329 [2024-07-25 09:41:29.814885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.329 [2024-07-25 09:41:29.814900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.329 [2024-07-25 09:41:29.814912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.329 [2024-07-25 09:41:29.814943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.329 qpair failed and we were unable to recover it. 00:26:57.329 [2024-07-25 09:41:29.824729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.329 [2024-07-25 09:41:29.824847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.329 [2024-07-25 09:41:29.824873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.329 [2024-07-25 09:41:29.824887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.329 [2024-07-25 09:41:29.824905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.824936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:29.834778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.834888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.834912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.834926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.834938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.834968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.844806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.844917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.844943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.844958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.844970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.845000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.854799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.854887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.854913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.854927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.854940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.854969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:29.864858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.864957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.864982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.864997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.865009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.865038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.874903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.875023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.875049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.875064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.875076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.875105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.884955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.885055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.885081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.885095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.885108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.885137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:29.894948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.895091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.895117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.895132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.895144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.895173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.904954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.905089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.905116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.905131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.905143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.905173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.915002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.915129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.915155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.915175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.915188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.915217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:29.924995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.925120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.925146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.925160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.925173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.925203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.935047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.935171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.935197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.935211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.935223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.935253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.945162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.945305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.945331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.945345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.945367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.945399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:29.955116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.955222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.955248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.955263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.955276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.955305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.965109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.965212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.965238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.965253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.965265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.965294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.975201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.975311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.975337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.975351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.975373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.975404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:29.985160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.985260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.985286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.985300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.985313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.985342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:29.995183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:29.995289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:29.995314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:29.995329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:29.995342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:29.995380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:30.005233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:30.005326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:30.005352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:30.005382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:30.005395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:30.005426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:30.015247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:30.015346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:30.015380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:30.015395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:30.015407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:30.015437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:30.025392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:30.025525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:30.025553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:30.025568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:30.025581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:30.025613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 00:26:57.330 [2024-07-25 09:41:30.035442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:30.035564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.330 [2024-07-25 09:41:30.035590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.330 [2024-07-25 09:41:30.035605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.330 [2024-07-25 09:41:30.035618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.330 [2024-07-25 09:41:30.035648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.330 qpair failed and we were unable to recover it. 
00:26:57.330 [2024-07-25 09:41:30.045373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.330 [2024-07-25 09:41:30.045470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.331 [2024-07-25 09:41:30.045496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.331 [2024-07-25 09:41:30.045512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.331 [2024-07-25 09:41:30.045524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.331 [2024-07-25 09:41:30.045555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.331 qpair failed and we were unable to recover it. 00:26:57.331 [2024-07-25 09:41:30.055382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.331 [2024-07-25 09:41:30.055472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.331 [2024-07-25 09:41:30.055498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.331 [2024-07-25 09:41:30.055513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.331 [2024-07-25 09:41:30.055525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.331 [2024-07-25 09:41:30.055555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.331 qpair failed and we were unable to recover it. 00:26:57.587 [2024-07-25 09:41:30.065439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.587 [2024-07-25 09:41:30.065528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.587 [2024-07-25 09:41:30.065554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.587 [2024-07-25 09:41:30.065569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.587 [2024-07-25 09:41:30.065581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.587 [2024-07-25 09:41:30.065610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.587 qpair failed and we were unable to recover it. 
00:26:57.587 [2024-07-25 09:41:30.075482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.587 [2024-07-25 09:41:30.075582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.587 [2024-07-25 09:41:30.075607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.587 [2024-07-25 09:41:30.075622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.587 [2024-07-25 09:41:30.075633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.587 [2024-07-25 09:41:30.075663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.587 qpair failed and we were unable to recover it. 00:26:57.587 [2024-07-25 09:41:30.085487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.587 [2024-07-25 09:41:30.085577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.587 [2024-07-25 09:41:30.085602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.587 [2024-07-25 09:41:30.085616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.587 [2024-07-25 09:41:30.085628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.587 [2024-07-25 09:41:30.085658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.587 qpair failed and we were unable to recover it. 00:26:57.587 [2024-07-25 09:41:30.095527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.587 [2024-07-25 09:41:30.095634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.587 [2024-07-25 09:41:30.095664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.587 [2024-07-25 09:41:30.095679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.587 [2024-07-25 09:41:30.095691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.095721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 
00:26:57.588 [2024-07-25 09:41:30.105536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.105621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.105645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.105659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.105671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.105701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.115681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.115790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.115816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.115830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.115842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.115872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.125639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.125736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.125763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.125777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.125789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.125818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 
00:26:57.588 [2024-07-25 09:41:30.135658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.135760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.135784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.135798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.135810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.135845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.145680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.145833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.145858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.145873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.145885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.145915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.155762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.155866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.155892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.155906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.155919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.155948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 
00:26:57.588 [2024-07-25 09:41:30.165694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.165797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.165823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.165838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.165850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.165880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.175711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.175832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.175857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.175871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.175883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.175912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.185760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.185885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.185918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.185933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.185946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.185975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 
00:26:57.588 [2024-07-25 09:41:30.195817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.195924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.195950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.195965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.195977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.196006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.205864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.205966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.205992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.206006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.206018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.206048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 00:26:57.588 [2024-07-25 09:41:30.215852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.215962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.215987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.216001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.216014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.588 [2024-07-25 09:41:30.216043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.588 qpair failed and we were unable to recover it. 
00:26:57.588 [2024-07-25 09:41:30.225896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.588 [2024-07-25 09:41:30.225983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.588 [2024-07-25 09:41:30.226009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.588 [2024-07-25 09:41:30.226024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.588 [2024-07-25 09:41:30.226041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.226072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.589 [2024-07-25 09:41:30.235907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.236011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.236037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.236051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.236063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.236093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.589 [2024-07-25 09:41:30.245914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.246012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.246039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.246053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.246065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.246096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 
00:26:57.589 [2024-07-25 09:41:30.255963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.256065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.256091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.256106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.256118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.256147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.589 [2024-07-25 09:41:30.265969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.266073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.266098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.266113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.266125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.266155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.589 [2024-07-25 09:41:30.276019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.276138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.276164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.276178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.276190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.276219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 
00:26:57.589 [2024-07-25 09:41:30.286020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.286120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.286146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.286160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.286172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.286202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.589 [2024-07-25 09:41:30.296057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.296197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.296223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.296237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.296249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.296279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.589 [2024-07-25 09:41:30.306106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.306230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.306256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.306270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.306282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.306312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 
00:26:57.589 [2024-07-25 09:41:30.316128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.589 [2024-07-25 09:41:30.316229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.589 [2024-07-25 09:41:30.316255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.589 [2024-07-25 09:41:30.316274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.589 [2024-07-25 09:41:30.316287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.589 [2024-07-25 09:41:30.316318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.589 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.326152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.326253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.326277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.326291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.326303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.326333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.336171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.336276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.336301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.336315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.336327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.336365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 
00:26:57.848 [2024-07-25 09:41:30.346222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.346327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.346353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.346376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.346389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.346419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.356251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.356375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.356401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.356416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.356428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.356458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.366267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.366393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.366419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.366434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.366446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.366476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 
00:26:57.848 [2024-07-25 09:41:30.376282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.376392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.376418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.376433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.376446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.376476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.386332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.386440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.386466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.386481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.386494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.386524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.396379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.396469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.396494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.396508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.396521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.396551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 
00:26:57.848 [2024-07-25 09:41:30.406421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.406519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.406545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.406565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.406578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.406608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.416484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.416578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.416603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.416618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.416630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.848 [2024-07-25 09:41:30.416660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.848 qpair failed and we were unable to recover it. 00:26:57.848 [2024-07-25 09:41:30.426486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.848 [2024-07-25 09:41:30.426575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.848 [2024-07-25 09:41:30.426600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.848 [2024-07-25 09:41:30.426615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.848 [2024-07-25 09:41:30.426627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.426657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 
00:26:57.849 [2024-07-25 09:41:30.436502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.436591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.436616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.436630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.436643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.436672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.446504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.446591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.446617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.446633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.446646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.446676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.456560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.456652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.456677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.456692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.456704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.456735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 
00:26:57.849 [2024-07-25 09:41:30.466659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.466765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.466794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.466809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.466821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.466851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.476699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.476822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.476847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.476861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.476873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.476903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.486722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.486848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.486873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.486887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.486900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.486929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 
00:26:57.849 [2024-07-25 09:41:30.496641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.496730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.496760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.496775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.496788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.496817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.506784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.506948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.506974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.506989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.507001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.507031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.516720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.516869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.516894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.516909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.516921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.516950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 
00:26:57.849 [2024-07-25 09:41:30.526734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.526847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.526872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.526887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.526899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.526928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.536766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.536886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.536912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.536926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.536938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.536973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.849 [2024-07-25 09:41:30.546814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.546952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.546978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.546992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.547004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.547034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 
00:26:57.849 [2024-07-25 09:41:30.556916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.849 [2024-07-25 09:41:30.557020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.849 [2024-07-25 09:41:30.557045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.849 [2024-07-25 09:41:30.557059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.849 [2024-07-25 09:41:30.557071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.849 [2024-07-25 09:41:30.557101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.849 qpair failed and we were unable to recover it. 00:26:57.850 [2024-07-25 09:41:30.566892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.850 [2024-07-25 09:41:30.566991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.850 [2024-07-25 09:41:30.567016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.850 [2024-07-25 09:41:30.567031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.850 [2024-07-25 09:41:30.567043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.850 [2024-07-25 09:41:30.567073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.850 qpair failed and we were unable to recover it. 00:26:57.850 [2024-07-25 09:41:30.576866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.850 [2024-07-25 09:41:30.576949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.850 [2024-07-25 09:41:30.576973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.850 [2024-07-25 09:41:30.576987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.850 [2024-07-25 09:41:30.576999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:57.850 [2024-07-25 09:41:30.577028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.850 qpair failed and we were unable to recover it. 
00:26:58.108 [2024-07-25 09:41:30.586985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.587084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.587113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.587128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.587140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.587177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 00:26:58.108 [2024-07-25 09:41:30.596998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.597102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.597127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.597141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.597153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.597183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 00:26:58.108 [2024-07-25 09:41:30.606960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.607066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.607092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.607107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.607119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.607151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 
00:26:58.108 [2024-07-25 09:41:30.616971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.617127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.617153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.617167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.617183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.617213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 00:26:58.108 [2024-07-25 09:41:30.626988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.627084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.627108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.627122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.627140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.627170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 00:26:58.108 [2024-07-25 09:41:30.637081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.637189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.637214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.637229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.637241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.637271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 
00:26:58.108 [2024-07-25 09:41:30.647092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.647234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.647260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.647274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.647286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.108 [2024-07-25 09:41:30.647324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.108 qpair failed and we were unable to recover it. 00:26:58.108 [2024-07-25 09:41:30.657156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.108 [2024-07-25 09:41:30.657256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.108 [2024-07-25 09:41:30.657288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.108 [2024-07-25 09:41:30.657302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.108 [2024-07-25 09:41:30.657314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.657344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.667149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.667248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.667274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.667288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.667300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.667329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 
00:26:58.109 [2024-07-25 09:41:30.677189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.677298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.677324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.677338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.677351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.677389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.687209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.687345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.687380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.687395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.687407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.687440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.697191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.697317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.697343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.697366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.697380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.697411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 
00:26:58.109 [2024-07-25 09:41:30.707228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.707336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.707368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.707385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.707397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.707426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.717372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.717464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.717490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.717504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.717522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.717552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.727321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.727431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.727457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.727472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.727484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.727514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 
00:26:58.109 [2024-07-25 09:41:30.737290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.737417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.737443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.737458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.737470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.737499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.747418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.747506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.747531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.747546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.747558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.747588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.757484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.757579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.757605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.757619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.757631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.757662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 
00:26:58.109 [2024-07-25 09:41:30.767454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.767537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.767562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.767575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.767588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.767618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.777565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.777660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.777690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.109 [2024-07-25 09:41:30.777705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.109 [2024-07-25 09:41:30.777717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.109 [2024-07-25 09:41:30.777746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.109 qpair failed and we were unable to recover it. 00:26:58.109 [2024-07-25 09:41:30.787511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.109 [2024-07-25 09:41:30.787641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.109 [2024-07-25 09:41:30.787667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.110 [2024-07-25 09:41:30.787681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.110 [2024-07-25 09:41:30.787693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.110 [2024-07-25 09:41:30.787726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.110 qpair failed and we were unable to recover it. 
00:26:58.110 [2024-07-25 09:41:30.797538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.110 [2024-07-25 09:41:30.797630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.110 [2024-07-25 09:41:30.797655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.110 [2024-07-25 09:41:30.797669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.110 [2024-07-25 09:41:30.797681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.110 [2024-07-25 09:41:30.797711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.110 qpair failed and we were unable to recover it. 00:26:58.110 [2024-07-25 09:41:30.807572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.110 [2024-07-25 09:41:30.807708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.110 [2024-07-25 09:41:30.807733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.110 [2024-07-25 09:41:30.807753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.110 [2024-07-25 09:41:30.807766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.110 [2024-07-25 09:41:30.807797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.110 qpair failed and we were unable to recover it. 00:26:58.110 [2024-07-25 09:41:30.817652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.110 [2024-07-25 09:41:30.817769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.110 [2024-07-25 09:41:30.817794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.110 [2024-07-25 09:41:30.817808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.110 [2024-07-25 09:41:30.817821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.110 [2024-07-25 09:41:30.817858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.110 qpair failed and we were unable to recover it. 
00:26:58.110 [2024-07-25 09:41:30.827632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.110 [2024-07-25 09:41:30.827750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.110 [2024-07-25 09:41:30.827774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.110 [2024-07-25 09:41:30.827788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.110 [2024-07-25 09:41:30.827800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.110 [2024-07-25 09:41:30.827830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.110 qpair failed and we were unable to recover it. 00:26:58.110 [2024-07-25 09:41:30.837664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.110 [2024-07-25 09:41:30.837806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.110 [2024-07-25 09:41:30.837831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.110 [2024-07-25 09:41:30.837846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.110 [2024-07-25 09:41:30.837858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.110 [2024-07-25 09:41:30.837893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.110 qpair failed and we were unable to recover it. 00:26:58.368 [2024-07-25 09:41:30.847705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.368 [2024-07-25 09:41:30.847835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.368 [2024-07-25 09:41:30.847860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.847874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.847886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.847921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 
00:26:58.369 [2024-07-25 09:41:30.857688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.857798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.857822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.857836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.857848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.857877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.867711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.867813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.867837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.867851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.867863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.867892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.877746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.877853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.877878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.877892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.877905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.877934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 
00:26:58.369 [2024-07-25 09:41:30.887756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.887861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.887887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.887901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.887913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.887944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.897862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.897963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.897993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.898009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.898021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.898051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.907841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.907943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.907968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.907983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.907995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.908025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 
00:26:58.369 [2024-07-25 09:41:30.917874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.917979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.918004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.918018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.918030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.918061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.927876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.927979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.928004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.928019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.928031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.928061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.937941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.938040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.938065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.938080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.938092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.938129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 
00:26:58.369 [2024-07-25 09:41:30.947961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.948071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.948096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.948111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.948123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.948153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.957997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.958122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.958148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.958163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.958175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.958205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 00:26:58.369 [2024-07-25 09:41:30.967993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.968096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.968121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.968136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.369 [2024-07-25 09:41:30.968148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.369 [2024-07-25 09:41:30.968177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.369 qpair failed and we were unable to recover it. 
00:26:58.369 [2024-07-25 09:41:30.978017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.369 [2024-07-25 09:41:30.978161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.369 [2024-07-25 09:41:30.978187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.369 [2024-07-25 09:41:30.978202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:30.978214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:30.978244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:30.988052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:30.988168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:30.988199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:30.988214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:30.988226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:30.988256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:30.998084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:30.998184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:30.998210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:30.998224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:30.998237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:30.998266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 
00:26:58.370 [2024-07-25 09:41:31.008166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.008271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.008297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.008312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.008324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.008363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:31.018135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.018245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.018271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.018285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.018297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.018327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:31.028153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.028250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.028276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.028291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.028303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.028338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 
00:26:58.370 [2024-07-25 09:41:31.038230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.038385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.038411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.038426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.038438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.038469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:31.048249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.048411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.048437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.048452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.048464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.048494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:31.058267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.058374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.058400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.058415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.058427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.058456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 
00:26:58.370 [2024-07-25 09:41:31.068269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.068381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.068407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.068421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.068433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.068464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:31.078302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.078420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.078446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.078461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.078473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.078503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.370 [2024-07-25 09:41:31.088305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.088428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.088453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.088467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.088479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.088509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 
00:26:58.370 [2024-07-25 09:41:31.098392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.370 [2024-07-25 09:41:31.098485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.370 [2024-07-25 09:41:31.098511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.370 [2024-07-25 09:41:31.098525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.370 [2024-07-25 09:41:31.098537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.370 [2024-07-25 09:41:31.098567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.370 qpair failed and we were unable to recover it. 00:26:58.629 [2024-07-25 09:41:31.108458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.108549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.108573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.108588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.108600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.108631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 00:26:58.629 [2024-07-25 09:41:31.118431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.118528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.118553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.118568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.118586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.118617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 
00:26:58.629 [2024-07-25 09:41:31.128526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.128619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.128644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.128659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.128671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.128701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 00:26:58.629 [2024-07-25 09:41:31.138485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.138591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.138617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.138631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.138643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.138672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 00:26:58.629 [2024-07-25 09:41:31.148610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.148739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.148764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.148778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.148790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.148821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 
00:26:58.629 [2024-07-25 09:41:31.158606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.158733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.158757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.158771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.158783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.158813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 00:26:58.629 [2024-07-25 09:41:31.168584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.168677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.168702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.168717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.168729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.168758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 00:26:58.629 [2024-07-25 09:41:31.178659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.178783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.629 [2024-07-25 09:41:31.178807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.629 [2024-07-25 09:41:31.178822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.629 [2024-07-25 09:41:31.178834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.629 [2024-07-25 09:41:31.178864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.629 qpair failed and we were unable to recover it. 
00:26:58.629 [2024-07-25 09:41:31.188628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.629 [2024-07-25 09:41:31.188763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.188787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.188801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.188813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.188843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.198707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.198814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.198840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.198854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.198866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.198898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.208717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.208860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.208885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.208905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.208918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.208947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-07-25 09:41:31.218736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.218842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.218868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.218882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.218894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.218924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.228726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.228822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.228847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.228861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.228874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.228903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.238798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.238915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.238941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.238955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.238967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.239003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-07-25 09:41:31.248855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.249006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.249032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.249047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.249059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.249088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.258847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.258949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.258974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.258989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.259001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.259030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.268883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.269029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.269054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.269068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.269080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.269111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-07-25 09:41:31.278883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.278991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.279016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.279031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.279043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.279072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.288914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.289019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.289044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.289060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.289072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.289101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.299010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.299170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.299200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.299216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.299228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.299257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 
00:26:58.630 [2024-07-25 09:41:31.309006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.309105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.309131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.309146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.630 [2024-07-25 09:41:31.309158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.630 [2024-07-25 09:41:31.309187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.630 qpair failed and we were unable to recover it. 00:26:58.630 [2024-07-25 09:41:31.319002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.630 [2024-07-25 09:41:31.319123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.630 [2024-07-25 09:41:31.319148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.630 [2024-07-25 09:41:31.319163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.631 [2024-07-25 09:41:31.319176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.631 [2024-07-25 09:41:31.319206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-07-25 09:41:31.329026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.631 [2024-07-25 09:41:31.329126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.631 [2024-07-25 09:41:31.329150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.631 [2024-07-25 09:41:31.329164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.631 [2024-07-25 09:41:31.329176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.631 [2024-07-25 09:41:31.329206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.631 [2024-07-25 09:41:31.339032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.631 [2024-07-25 09:41:31.339156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.631 [2024-07-25 09:41:31.339182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.631 [2024-07-25 09:41:31.339196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.631 [2024-07-25 09:41:31.339208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.631 [2024-07-25 09:41:31.339244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-07-25 09:41:31.349118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.631 [2024-07-25 09:41:31.349226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.631 [2024-07-25 09:41:31.349252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.631 [2024-07-25 09:41:31.349267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.631 [2024-07-25 09:41:31.349279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.631 [2024-07-25 09:41:31.349310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.631 qpair failed and we were unable to recover it. 00:26:58.631 [2024-07-25 09:41:31.359093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.631 [2024-07-25 09:41:31.359243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.631 [2024-07-25 09:41:31.359268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.631 [2024-07-25 09:41:31.359283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.631 [2024-07-25 09:41:31.359295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.631 [2024-07-25 09:41:31.359325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.631 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-25 09:41:31.369169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.369273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.369299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.369313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.369325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.369363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.379185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.379281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.379306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.379321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.379333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.379373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.389208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.389304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.389335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.389351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.389373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.389408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-25 09:41:31.399238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.399365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.399391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.399405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.399418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.399448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.409264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.409379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.409405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.409419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.409431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.409471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.419316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.419458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.419484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.419498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.419510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.419540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-25 09:41:31.429311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.429423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.429449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.429463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.429476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.429512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.439320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.439448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.439474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.439488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.439500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.439530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.449422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.449547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.449573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.449587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.449600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.449629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-25 09:41:31.459425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.459512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.459537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.459551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.459564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.459593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-25 09:41:31.469440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.889 [2024-07-25 09:41:31.469528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.889 [2024-07-25 09:41:31.469554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.889 [2024-07-25 09:41:31.469569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.889 [2024-07-25 09:41:31.469581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.889 [2024-07-25 09:41:31.469611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.479477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.479567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.479597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.479613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.479625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.479655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-25 09:41:31.489521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.489611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.489636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.489650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.489662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.489692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.499560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.499647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.499672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.499686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.499698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.499728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.509580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.509708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.509734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.509748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.509760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.509790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-25 09:41:31.519597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.519694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.519719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.519734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.519751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.519782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.529600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.529688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.529714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.529729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.529742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.529772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.539659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.539762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.539787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.539801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.539813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.539843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-25 09:41:31.549661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.549771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.549795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.549810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.549822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.549852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.559689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.559824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.559848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.559862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.559874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.559904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.569735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.569842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.569867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.569882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.569894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.569925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-25 09:41:31.579817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.579943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.579969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.579983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.579995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.580025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.589790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.589891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.589915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.589929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.589942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.589972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-25 09:41:31.599813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.599922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.890 [2024-07-25 09:41:31.599949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.890 [2024-07-25 09:41:31.599963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.890 [2024-07-25 09:41:31.599975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.890 [2024-07-25 09:41:31.600005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-25 09:41:31.609841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.890 [2024-07-25 09:41:31.609947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.891 [2024-07-25 09:41:31.609973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.891 [2024-07-25 09:41:31.609993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.891 [2024-07-25 09:41:31.610006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.891 [2024-07-25 09:41:31.610036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.891 qpair failed and we were unable to recover it. 00:26:58.891 [2024-07-25 09:41:31.619906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.891 [2024-07-25 09:41:31.620022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.891 [2024-07-25 09:41:31.620048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.891 [2024-07-25 09:41:31.620063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.891 [2024-07-25 09:41:31.620075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fece0000b90 00:26:58.891 [2024-07-25 09:41:31.620104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.891 qpair failed and we were unable to recover it. 00:26:58.891 [2024-07-25 09:41:31.620235] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:58.891 A controller has encountered a failure and is being reset. 00:26:59.148 Controller properly reset. 00:26:59.148 Initializing NVMe Controllers 00:26:59.148 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:59.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:59.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:59.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:59.148 Initialization complete. Launching workers. 
00:26:59.148 Starting thread on core 1 00:26:59.148 Starting thread on core 2 00:26:59.148 Starting thread on core 3 00:26:59.148 Starting thread on core 0 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:59.148 00:26:59.148 real 0m10.874s 00:26:59.148 user 0m18.388s 00:26:59.148 sys 0m5.206s 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.148 ************************************ 00:26:59.148 END TEST nvmf_target_disconnect_tc2 00:26:59.148 ************************************ 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.148 rmmod nvme_tcp 00:26:59.148 rmmod nvme_fabrics 00:26:59.148 rmmod nvme_keyring 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 630055 ']' 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 630055 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 630055 ']' 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 630055 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 630055 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 630055' 00:26:59.148 killing process with pid 630055 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@967 -- # kill 630055 00:26:59.148 09:41:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 630055 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.406 09:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.936 09:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.936 00:27:01.936 real 0m15.599s 00:27:01.936 user 0m44.893s 00:27:01.936 sys 0m7.073s 00:27:01.936 09:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.936 09:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 ************************************ 00:27:01.936 END TEST nvmf_target_disconnect 00:27:01.936 ************************************ 00:27:01.936 09:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:01.936 00:27:01.936 real 5m13.975s 00:27:01.936 user 11m5.097s 00:27:01.936 sys 1m14.493s 00:27:01.936 09:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.936 09:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 ************************************ 00:27:01.936 END TEST nvmf_host 00:27:01.936 ************************************ 00:27:01.936 00:27:01.936 real 19m47.737s 00:27:01.936 user 46m55.622s 00:27:01.936 sys 4m55.167s 00:27:01.936 09:41:34 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.936 09:41:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 ************************************ 00:27:01.936 END TEST nvmf_tcp 00:27:01.936 ************************************ 00:27:01.936 09:41:34 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:01.936 09:41:34 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:01.936 09:41:34 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.936 09:41:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.936 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 ************************************ 00:27:01.936 START TEST spdkcli_nvmf_tcp 00:27:01.936 ************************************ 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:01.936 * Looking for test storage... 
00:27:01.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.936 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=631258 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 631258 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 631258 ']' 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.937 [2024-07-25 09:41:34.314103] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:27:01.937 [2024-07-25 09:41:34.314191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631258 ] 00:27:01.937 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.937 [2024-07-25 09:41:34.370188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:01.937 [2024-07-25 09:41:34.490457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.937 [2024-07-25 09:41:34.490462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.937 09:41:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:01.937 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:01.937 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:01.937 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:01.937 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:01.937 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:01.937 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:01.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:01.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:01.937 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:01.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:01.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:01.937 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:01.937 ' 00:27:05.217 [2024-07-25 09:41:37.230434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.782 [2024-07-25 09:41:38.466806] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:08.307 [2024-07-25 09:41:40.754034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:10.204 [2024-07-25 09:41:42.716235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:11.577 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:11.577 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:11.577 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:11.577 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:11.577 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:11.577 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:11.577 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:11.577 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:11.577 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:11.577 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:11.577 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:11.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:11.577 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:11.834 09:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:12.091 09:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:12.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:12.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:12.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:12.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:12.091 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:12.091 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:12.091 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:12.091 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:12.091 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:12.091 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:12.091 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:12.091 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:12.091 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:12.091 ' 00:27:17.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:17.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:17.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:17.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:17.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:17.352 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:17.352 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:17.352 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:17.352 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:17.352 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:17.352 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:17.352 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:17.352 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:17.352 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 631258 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 631258 ']' 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 631258 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.352 09:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 631258 00:27:17.352 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:17.352 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:17.352 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 631258' 00:27:17.352 killing process with pid 631258 00:27:17.352 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 631258 00:27:17.352 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 631258 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 631258 ']' 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 631258 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 631258 ']' 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 631258 00:27:17.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (631258) - No such process 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 631258 is not found' 00:27:17.610 Process with pid 631258 is not found 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:17.610 00:27:17.610 real 0m16.068s 00:27:17.610 user 0m33.890s 00:27:17.610 sys 0m0.867s 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.610 09:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.610 ************************************ 00:27:17.610 END TEST spdkcli_nvmf_tcp 00:27:17.610 ************************************ 00:27:17.610 09:41:50 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:17.610 09:41:50 -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:17.610 09:41:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.610 09:41:50 -- common/autotest_common.sh@10 -- # set +x 00:27:17.610 ************************************ 00:27:17.610 START TEST nvmf_identify_passthru 00:27:17.610 ************************************ 00:27:17.610 09:41:50 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:17.868 * Looking for test storage... 00:27:17.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:17.868 09:41:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.868 09:41:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.868 09:41:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.868 09:41:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.868 09:41:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.868 09:41:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.868 09:41:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.868 09:41:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:17.868 09:41:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.868 09:41:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.868 09:41:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:17.868 09:41:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.868 09:41:50 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.868 09:41:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:19.766 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:19.766 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:19.766 09:41:52 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.766 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:19.767 Found net devices under 0000:82:00.0: cvl_0_0 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:19.767 Found net devices under 0000:82:00.1: cvl_0_1 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.767 09:41:52 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:19.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:27:19.767 00:27:19.767 --- 10.0.0.2 ping statistics --- 00:27:19.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.767 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:27:19.767 00:27:19.767 --- 10.0.0.1 ping statistics --- 00:27:19.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.767 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.767 09:41:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.767 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:19.767 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:19.767 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:27:20.025 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:27:20.025 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:81:00.0 00:27:20.025 09:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:81:00.0 00:27:20.025 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:81:00.0 00:27:20.025 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:81:00.0 ']' 00:27:20.025 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:27:20.025 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:20.025 09:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:20.025 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.305 
09:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ951302VM2P0BGN 00:27:25.306 09:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:27:25.306 09:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:25.306 09:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:25.306 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=636134 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:30.564 09:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 636134 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 636134 ']' 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.564 09:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.564 [2024-07-25 09:42:02.883989] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:27:30.564 [2024-07-25 09:42:02.884085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.564 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.564 [2024-07-25 09:42:02.948841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.564 [2024-07-25 09:42:03.058687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.564 [2024-07-25 09:42:03.058738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:30.564 [2024-07-25 09:42:03.058751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.564 [2024-07-25 09:42:03.058763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.564 [2024-07-25 09:42:03.058772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.564 [2024-07-25 09:42:03.058865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.564 [2024-07-25 09:42:03.058936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.564 [2024-07-25 09:42:03.058996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.564 [2024-07-25 09:42:03.058993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:30.564 09:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.564 INFO: Log level set to 20 00:27:30.564 INFO: Requests: 00:27:30.564 { 00:27:30.564 "jsonrpc": "2.0", 00:27:30.564 "method": "nvmf_set_config", 00:27:30.564 "id": 1, 00:27:30.564 "params": { 00:27:30.564 "admin_cmd_passthru": { 00:27:30.564 "identify_ctrlr": true 00:27:30.564 } 00:27:30.564 } 00:27:30.564 } 00:27:30.564 00:27:30.564 INFO: response: 00:27:30.564 { 00:27:30.564 "jsonrpc": "2.0", 00:27:30.564 "id": 1, 00:27:30.564 "result": true 00:27:30.564 } 00:27:30.564 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.564 09:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.564 INFO: Setting log level to 20 00:27:30.564 INFO: Setting log level to 20 00:27:30.564 INFO: Log level set to 20 00:27:30.564 INFO: Log level set to 20 00:27:30.564 INFO: Requests: 00:27:30.564 { 00:27:30.564 "jsonrpc": "2.0", 00:27:30.564 "method": "framework_start_init", 00:27:30.564 "id": 1 00:27:30.564 } 00:27:30.564 00:27:30.564 INFO: Requests: 00:27:30.564 { 00:27:30.564 "jsonrpc": "2.0", 00:27:30.564 "method": "framework_start_init", 00:27:30.564 "id": 1 00:27:30.564 } 00:27:30.564 00:27:30.564 [2024-07-25 09:42:03.197568] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:30.564 INFO: response: 00:27:30.564 { 00:27:30.564 "jsonrpc": "2.0", 00:27:30.564 "id": 1, 00:27:30.564 "result": true 00:27:30.564 } 00:27:30.564 00:27:30.564 INFO: response: 00:27:30.564 { 00:27:30.564 "jsonrpc": "2.0", 00:27:30.564 "id": 1, 00:27:30.564 "result": true 00:27:30.564 } 00:27:30.564 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.564 09:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.564 09:42:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.564 INFO: Setting log level to 40 00:27:30.564 INFO: Setting log level to 40 00:27:30.564 INFO: Setting log level to 40 00:27:30.564 [2024-07-25 09:42:03.207600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.564 09:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.564 09:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.564 09:42:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 Nvme0n1 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 [2024-07-25 09:42:06.093287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 [ 00:27:33.843 { 00:27:33.843 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:33.843 "subtype": "Discovery", 00:27:33.843 "listen_addresses": [], 00:27:33.843 "allow_any_host": true, 00:27:33.843 "hosts": [] 00:27:33.843 }, 00:27:33.843 { 00:27:33.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.843 "subtype": "NVMe", 00:27:33.843 "listen_addresses": [ 00:27:33.843 { 00:27:33.843 "trtype": "TCP", 00:27:33.843 "adrfam": "IPv4", 00:27:33.843 "traddr": "10.0.0.2", 00:27:33.843 "trsvcid": "4420" 00:27:33.843 } 00:27:33.843 ], 00:27:33.843 "allow_any_host": true, 00:27:33.843 "hosts": [], 00:27:33.843 "serial_number": 
"SPDK00000000000001", 00:27:33.843 "model_number": "SPDK bdev Controller", 00:27:33.843 "max_namespaces": 1, 00:27:33.843 "min_cntlid": 1, 00:27:33.843 "max_cntlid": 65519, 00:27:33.843 "namespaces": [ 00:27:33.843 { 00:27:33.843 "nsid": 1, 00:27:33.843 "bdev_name": "Nvme0n1", 00:27:33.843 "name": "Nvme0n1", 00:27:33.843 "nguid": "11411A88AA8C4673BE620CA351E0E509", 00:27:33.843 "uuid": "11411a88-aa8c-4673-be62-0ca351e0e509" 00:27:33.843 } 00:27:33.843 ] 00:27:33.843 } 00:27:33.843 ] 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:33.843 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ951302VM2P0BGN 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:33.843 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ951302VM2P0BGN '!=' PHLJ951302VM2P0BGN ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:33.843 09:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.843 rmmod nvme_tcp 00:27:33.843 rmmod nvme_fabrics 00:27:33.843 rmmod nvme_keyring 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:33.843 09:42:06 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 636134 ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 636134 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 636134 ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 636134 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 636134 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 636134' 00:27:33.843 killing process with pid 636134 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 636134 00:27:33.843 09:42:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 636134 00:27:36.368 09:42:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.368 09:42:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.368 09:42:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.368 09:42:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.368 09:42:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.368 09:42:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.368 09:42:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:36.368 09:42:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.265 09:42:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.265 00:27:38.265 real 0m20.659s 00:27:38.265 user 0m31.448s 00:27:38.265 sys 0m2.494s 00:27:38.265 09:42:10 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.265 09:42:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:38.265 ************************************ 00:27:38.265 END TEST nvmf_identify_passthru 00:27:38.265 ************************************ 00:27:38.265 09:42:10 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:38.522 09:42:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:38.522 09:42:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.522 09:42:10 -- common/autotest_common.sh@10 -- # set +x 00:27:38.522 ************************************ 00:27:38.522 START TEST nvmf_dif 00:27:38.522 ************************************ 00:27:38.522 09:42:11 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:38.522 * Looking for test storage... 
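Condensed from the trace above, the nvmf_identify_passthru run that just finished boils down to the sequence below. This is a minimal sketch, not the exact script: rpc_cmd is the test harness's wrapper around scripts/rpc.py, the long workspace paths are abbreviated, and error handling/traps are omitted.

  # Start the target inside the test namespace and hold framework init until RPC time.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

  rpc_cmd nvmf_set_config --passthru-identify-ctrlr    # sets admin_cmd_passthru.identify_ctrlr = true
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192

  # Export the local PCIe drive through an NVMe/TCP subsystem with one namespace.
  rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Pass criterion: identify data read over NVMe/TCP matches the PCIe-attached drive.
  pcie_sn=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
  tcp_sn=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
  [ "$pcie_sn" = "$tcp_sn" ] || exit 1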
00:27:38.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.522 09:42:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.522 09:42:11 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.522 09:42:11 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.522 09:42:11 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.522 09:42:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.522 09:42:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.522 09:42:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.522 09:42:11 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:38.522 09:42:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.522 09:42:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:38.522 09:42:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:38.522 09:42:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:38.522 09:42:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:38.522 09:42:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.522 09:42:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:38.522 09:42:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.522 09:42:11 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.522 09:42:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:40.426 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:40.426 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
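The device discovery being repeated here for the dif test (and shown in full for the passthru test earlier) amounts to roughly the following. lspci is used as a stand-in for the pci_bus_cache map that nvmf/common.sh builds outside this excerpt, and only the E810 IDs that actually matched on this host (0x1592/0x159b) are listed.

  net_devs=()
  for id in 8086:1592 8086:159b; do                          # Intel E810 vendor:device IDs
      for pci in $(lspci -Dn -d "$id" | awk '{print $1}'); do
          echo "Found $pci ($id)"
          for path in /sys/bus/pci/devices/"$pci"/net/*; do  # same sysfs lookup as common.sh@383
              [ -e "$path" ] || continue                     # skip ports with no bound netdev
              echo "Found net devices under $pci: ${path##*/}"
              net_devs+=("${path##*/}")
          done
      done
  done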
00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:40.426 Found net devices under 0000:82:00.0: cvl_0_0 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:40.426 Found net devices under 0000:82:00.1: cvl_0_1 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.426 09:42:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.426 09:42:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.426 09:42:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.427 09:42:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:27:40.427 00:27:40.427 --- 10.0.0.2 ping statistics --- 00:27:40.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.427 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:27:40.427 09:42:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:27:40.427 00:27:40.427 --- 10.0.0.1 ping statistics --- 00:27:40.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.427 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:27:40.427 09:42:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.427 09:42:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:40.427 09:42:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:40.427 09:42:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:42.046 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:42.046 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:42.046 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:42.046 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:42.046 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:42.046 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:42.046 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:42.046 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:42.046 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:42.046 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:42.046 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:42.046 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:42.046 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:42.046 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:42.046 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:42.046 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:42.046 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.046 09:42:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:42.046 09:42:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=640024 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:42.046 09:42:14 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 640024 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 640024 ']' 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.046 09:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.046 [2024-07-25 09:42:14.494990] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:27:42.046 [2024-07-25 09:42:14.495062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.046 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.046 [2024-07-25 09:42:14.560155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.046 [2024-07-25 09:42:14.668598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.046 [2024-07-25 09:42:14.668653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.046 [2024-07-25 09:42:14.668666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.046 [2024-07-25 09:42:14.668678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.046 [2024-07-25 09:42:14.668687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
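For reference, the namespace topology that nvmf_tcp_init assembled above, before this dif target was launched, condenses to the commands below (same as the trace, using the interface names it found: cvl_0_0 is the target side, cvl_0_1 the initiator side).

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator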
00:27:42.046 [2024-07-25 09:42:14.668714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:42.305 09:42:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 09:42:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.305 09:42:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:42.305 09:42:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 [2024-07-25 09:42:14.815095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.305 09:42:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 ************************************ 00:27:42.305 START TEST fio_dif_1_default 00:27:42.305 ************************************ 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 bdev_null0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.305 [2024-07-25 09:42:14.871379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.305 { 00:27:42.305 "params": { 00:27:42.305 "name": "Nvme$subsystem", 00:27:42.305 "trtype": "$TEST_TRANSPORT", 00:27:42.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.305 "adrfam": "ipv4", 00:27:42.305 "trsvcid": "$NVMF_PORT", 00:27:42.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.305 "hdgst": ${hdgst:-false}, 00:27:42.305 "ddgst": ${ddgst:-false} 00:27:42.305 }, 00:27:42.305 "method": "bdev_nvme_attach_controller" 00:27:42.305 } 00:27:42.305 EOF 00:27:42.305 )") 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.305 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.306 "params": { 00:27:42.306 "name": "Nvme0", 00:27:42.306 "trtype": "tcp", 00:27:42.306 "traddr": "10.0.0.2", 00:27:42.306 "adrfam": "ipv4", 00:27:42.306 "trsvcid": "4420", 00:27:42.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.306 "hdgst": false, 00:27:42.306 "ddgst": false 00:27:42.306 }, 00:27:42.306 "method": "bdev_nvme_attach_controller" 00:27:42.306 }' 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:42.306 09:42:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.563 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:42.563 fio-3.35 00:27:42.563 Starting 1 thread 00:27:42.563 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.751 00:27:54.751 filename0: (groupid=0, jobs=1): err= 0: pid=640252: Thu Jul 25 09:42:25 2024 00:27:54.751 read: IOPS=190, BW=762KiB/s (781kB/s)(7632KiB/10013msec) 00:27:54.751 slat (usec): min=5, max=246, avg= 9.45, stdev= 5.94 00:27:54.751 clat (usec): min=526, max=42394, avg=20960.31, stdev=20371.97 00:27:54.751 lat (usec): min=533, max=42407, avg=20969.77, stdev=20371.81 00:27:54.751 clat percentiles (usec): 00:27:54.751 | 1.00th=[ 545], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 00:27:54.751 | 30.00th=[ 627], 40.00th=[ 644], 50.00th=[ 4490], 60.00th=[41157], 00:27:54.751 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:54.751 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:54.751 | 99.99th=[42206] 00:27:54.751 bw ( KiB/s): min= 704, max= 768, per=99.84%, avg=761.60, stdev=19.70, samples=20 00:27:54.751 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:27:54.751 lat (usec) : 
750=49.42%, 1000=0.47% 00:27:54.751 lat (msec) : 10=0.21%, 50=49.90% 00:27:54.751 cpu : usr=90.16%, sys=9.52%, ctx=32, majf=0, minf=264 00:27:54.751 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.751 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.752 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:54.752 00:27:54.752 Run status group 0 (all jobs): 00:27:54.752 READ: bw=762KiB/s (781kB/s), 762KiB/s-762KiB/s (781kB/s-781kB/s), io=7632KiB (7815kB), run=10013-10013msec 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 00:27:54.752 real 0m11.276s 00:27:54.752 user 0m10.356s 00:27:54.752 sys 0m1.231s 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 ************************************ 00:27:54.752 END TEST fio_dif_1_default 00:27:54.752 ************************************ 00:27:54.752 09:42:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:54.752 09:42:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:54.752 09:42:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 ************************************ 00:27:54.752 START TEST fio_dif_1_multi_subsystems 00:27:54.752 ************************************ 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 bdev_null0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 [2024-07-25 09:42:26.198872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 bdev_null1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.752 { 00:27:54.752 "params": { 00:27:54.752 "name": "Nvme$subsystem", 00:27:54.752 "trtype": "$TEST_TRANSPORT", 00:27:54.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.752 "adrfam": "ipv4", 00:27:54.752 "trsvcid": "$NVMF_PORT", 00:27:54.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.752 "hdgst": ${hdgst:-false}, 00:27:54.752 "ddgst": ${ddgst:-false} 00:27:54.752 }, 00:27:54.752 "method": "bdev_nvme_attach_controller" 00:27:54.752 } 00:27:54.752 EOF 00:27:54.752 )") 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # shift 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.752 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.752 { 00:27:54.752 "params": { 00:27:54.752 "name": "Nvme$subsystem", 00:27:54.752 "trtype": "$TEST_TRANSPORT", 00:27:54.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.752 "adrfam": "ipv4", 00:27:54.752 "trsvcid": "$NVMF_PORT", 00:27:54.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.752 "hdgst": ${hdgst:-false}, 00:27:54.752 "ddgst": ${ddgst:-false} 00:27:54.752 }, 00:27:54.752 "method": "bdev_nvme_attach_controller" 00:27:54.752 } 00:27:54.752 EOF 00:27:54.752 )") 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
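Before the assembled JSON below is handed to fio, it helps to see what the rpc_cmd traces above reduce to: four RPC calls per subsystem. The sketch below is a consolidated, standalone rendering of that sequence; it assumes rpc_cmd forwards to scripts/rpc.py against the default RPC socket (as the SPDK test helpers usually do), and the loop form is illustrative, but the arguments are copied verbatim from the traced calls.

# consolidated sketch of the per-subsystem setup traced above; the rpc.py path
# and the loop are assumptions, the arguments mirror the rpc_cmd calls verbatim
for i in 0 1; do
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done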
00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.753 "params": { 00:27:54.753 "name": "Nvme0", 00:27:54.753 "trtype": "tcp", 00:27:54.753 "traddr": "10.0.0.2", 00:27:54.753 "adrfam": "ipv4", 00:27:54.753 "trsvcid": "4420", 00:27:54.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.753 "hdgst": false, 00:27:54.753 "ddgst": false 00:27:54.753 }, 00:27:54.753 "method": "bdev_nvme_attach_controller" 00:27:54.753 },{ 00:27:54.753 "params": { 00:27:54.753 "name": "Nvme1", 00:27:54.753 "trtype": "tcp", 00:27:54.753 "traddr": "10.0.0.2", 00:27:54.753 "adrfam": "ipv4", 00:27:54.753 "trsvcid": "4420", 00:27:54.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.753 "hdgst": false, 00:27:54.753 "ddgst": false 00:27:54.753 }, 00:27:54.753 "method": "bdev_nvme_attach_controller" 00:27:54.753 }' 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:54.753 09:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.753 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.753 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.753 fio-3.35 00:27:54.753 Starting 2 threads 00:27:54.753 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.715 00:28:04.715 filename0: (groupid=0, jobs=1): err= 0: pid=641657: Thu Jul 25 09:42:37 2024 00:28:04.715 read: IOPS=199, BW=800KiB/s (819kB/s)(8016KiB/10022msec) 00:28:04.715 slat (nsec): min=5357, max=51843, avg=10408.43, stdev=5346.64 00:28:04.715 clat (usec): min=533, max=42494, avg=19969.73, stdev=20425.20 00:28:04.715 lat (usec): min=540, max=42521, avg=19980.14, stdev=20423.94 00:28:04.715 clat percentiles (usec): 00:28:04.715 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 578], 00:28:04.715 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 717], 60.00th=[41157], 00:28:04.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:28:04.715 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:04.715 | 99.99th=[42730] 00:28:04.715 
bw ( KiB/s): min= 768, max= 896, per=67.36%, avg=800.00, stdev=44.05, samples=20 00:28:04.716 iops : min= 192, max= 224, avg=200.00, stdev=11.01, samples=20 00:28:04.716 lat (usec) : 750=50.35%, 1000=2.10% 00:28:04.716 lat (msec) : 2=0.05%, 10=0.20%, 50=47.31% 00:28:04.716 cpu : usr=97.53%, sys=2.18%, ctx=16, majf=0, minf=132 00:28:04.716 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.716 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.716 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.716 filename1: (groupid=0, jobs=1): err= 0: pid=641658: Thu Jul 25 09:42:37 2024 00:28:04.716 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10008msec) 00:28:04.716 slat (nsec): min=4744, max=72923, avg=11949.10, stdev=6288.75 00:28:04.716 clat (usec): min=547, max=46318, avg=41314.05, stdev=3756.78 00:28:04.716 lat (usec): min=555, max=46339, avg=41325.99, stdev=3756.31 00:28:04.716 clat percentiles (usec): 00:28:04.716 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:04.716 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:28:04.716 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:04.716 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:28:04.716 | 99.99th=[46400] 00:28:04.716 bw ( KiB/s): min= 352, max= 416, per=32.46%, avg=385.60, stdev=16.33, samples=20 00:28:04.716 iops : min= 88, max= 104, avg=96.40, stdev= 4.08, samples=20 00:28:04.716 lat (usec) : 750=0.83% 00:28:04.716 lat (msec) : 50=99.17% 00:28:04.716 cpu : usr=97.48%, sys=2.21%, ctx=23, majf=0, minf=199 00:28:04.716 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.716 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.716 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.716 00:28:04.716 Run status group 0 (all jobs): 00:28:04.716 READ: bw=1186KiB/s (1215kB/s), 387KiB/s-800KiB/s (396kB/s-819kB/s), io=11.6MiB (12.2MB), run=10008-10022msec 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.974 00:28:04.974 real 0m11.482s 00:28:04.974 user 0m20.905s 00:28:04.974 sys 0m0.770s 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 ************************************ 00:28:04.974 END TEST fio_dif_1_multi_subsystems 00:28:04.974 ************************************ 00:28:04.974 09:42:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:04.974 09:42:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:04.974 09:42:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 ************************************ 00:28:04.974 START TEST fio_dif_rand_params 00:28:04.974 ************************************ 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.974 
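fio_dif_rand_params starts here with NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5; those values end up in the job file that gen_fio_conf writes to /dev/fd/61, which the log never echoes. The heredoc below is a rough reconstruction: rw, bs, iodepth, numjobs, thread mode and the spdk_bdev engine are confirmed by the fio banner further down ("filename0: ... rw=randread, bs=128KiB ... ioengine=spdk_bdev, iodepth=3" and "Starting 3 threads"), while the filename= value, time_based and the output file name are assumptions.

# hypothetical reconstruction of the generated job file, for illustration only
cat > dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF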
09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.974 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.232 bdev_null0 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.232 [2024-07-25 09:42:37.735987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.232 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.232 { 00:28:05.232 "params": { 00:28:05.232 "name": "Nvme$subsystem", 00:28:05.232 "trtype": "$TEST_TRANSPORT", 00:28:05.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.233 "adrfam": "ipv4", 00:28:05.233 "trsvcid": "$NVMF_PORT", 00:28:05.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.233 "hdgst": ${hdgst:-false}, 00:28:05.233 "ddgst": ${ddgst:-false} 00:28:05.233 }, 00:28:05.233 "method": "bdev_nvme_attach_controller" 00:28:05.233 } 00:28:05.233 EOF 00:28:05.233 )") 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
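With the attach-controller JSON assembled, the launch itself (visible in the fio_plugin trace that follows) is just fio with the SPDK bdev engine preloaded and the two generated configs passed in as file descriptors. A condensed equivalent is sketched below; $nvme_attach_json and $fio_job are placeholder variables standing in for the configs the test pipes through /dev/fd/62 and /dev/fd/61.

# condensed sketch of the traced launch; the variable names are placeholders,
# the plugin path, engine and option names are taken from the trace
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(printf '%s\n' "$nvme_attach_json") \
    <(printf '%s\n' "$fio_job")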
00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.233 "params": { 00:28:05.233 "name": "Nvme0", 00:28:05.233 "trtype": "tcp", 00:28:05.233 "traddr": "10.0.0.2", 00:28:05.233 "adrfam": "ipv4", 00:28:05.233 "trsvcid": "4420", 00:28:05.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:05.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:05.233 "hdgst": false, 00:28:05.233 "ddgst": false 00:28:05.233 }, 00:28:05.233 "method": "bdev_nvme_attach_controller" 00:28:05.233 }' 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:05.233 09:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.491 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:05.491 ... 
00:28:05.491 fio-3.35 00:28:05.491 Starting 3 threads 00:28:05.491 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.048 00:28:12.048 filename0: (groupid=0, jobs=1): err= 0: pid=643060: Thu Jul 25 09:42:43 2024 00:28:12.048 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(140MiB/5048msec) 00:28:12.048 slat (nsec): min=5223, max=39622, avg=20559.65, stdev=4827.08 00:28:12.048 clat (usec): min=7060, max=51145, avg=13446.26, stdev=2279.52 00:28:12.048 lat (usec): min=7081, max=51167, avg=13466.82, stdev=2279.77 00:28:12.048 clat percentiles (usec): 00:28:12.048 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:28:12.048 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13698], 00:28:12.048 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15664], 95.00th=[16450], 00:28:12.048 | 99.00th=[17433], 99.50th=[18220], 99.90th=[50070], 99.95th=[51119], 00:28:12.048 | 99.99th=[51119] 00:28:12.048 bw ( KiB/s): min=26112, max=31488, per=34.64%, avg=28620.80, stdev=1636.09, samples=10 00:28:12.048 iops : min= 204, max= 246, avg=223.60, stdev=12.78, samples=10 00:28:12.048 lat (msec) : 10=2.23%, 20=97.59%, 100=0.18% 00:28:12.048 cpu : usr=94.69%, sys=4.68%, ctx=26, majf=0, minf=108 00:28:12.048 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.048 issued rwts: total=1121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.048 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:12.048 filename0: (groupid=0, jobs=1): err= 0: pid=643061: Thu Jul 25 09:42:43 2024 00:28:12.048 read: IOPS=190, BW=23.9MiB/s (25.0MB/s)(120MiB/5046msec) 00:28:12.048 slat (nsec): min=4729, max=69737, avg=17899.44, stdev=5869.62 00:28:12.048 clat (usec): min=6918, max=51344, avg=15654.07, stdev=2778.41 00:28:12.048 lat (usec): min=6932, max=51359, avg=15671.97, stdev=2778.90 00:28:12.048 clat percentiles (usec): 00:28:12.048 | 1.00th=[ 8356], 5.00th=[11863], 10.00th=[12649], 20.00th=[13698], 00:28:12.048 | 30.00th=[14615], 40.00th=[15533], 50.00th=[16057], 60.00th=[16581], 00:28:12.048 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:28:12.048 | 99.00th=[19268], 99.50th=[24249], 99.90th=[51119], 99.95th=[51119], 00:28:12.048 | 99.99th=[51119] 00:28:12.048 bw ( KiB/s): min=23296, max=26933, per=29.75%, avg=24581.30, stdev=1299.10, samples=10 00:28:12.048 iops : min= 182, max= 210, avg=192.00, stdev=10.07, samples=10 00:28:12.048 lat (msec) : 10=3.01%, 20=96.37%, 50=0.52%, 100=0.10% 00:28:12.048 cpu : usr=86.48%, sys=7.87%, ctx=412, majf=0, minf=113 00:28:12.048 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.048 issued rwts: total=963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.048 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:12.048 filename0: (groupid=0, jobs=1): err= 0: pid=643062: Thu Jul 25 09:42:43 2024 00:28:12.048 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5047msec) 00:28:12.048 slat (usec): min=5, max=257, avg=18.72, stdev= 8.88 00:28:12.048 clat (usec): min=9450, max=55140, avg=12834.43, stdev=4069.19 00:28:12.048 lat (usec): min=9465, max=55156, avg=12853.15, stdev=4068.88 00:28:12.048 clat percentiles (usec): 00:28:12.048 | 1.00th=[10159], 
5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:28:12.048 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:28:12.048 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:28:12.048 | 99.00th=[16188], 99.50th=[53740], 99.90th=[54789], 99.95th=[55313], 00:28:12.048 | 99.99th=[55313] 00:28:12.048 bw ( KiB/s): min=24576, max=31488, per=36.32%, avg=30003.20, stdev=2103.43, samples=10 00:28:12.048 iops : min= 192, max= 246, avg=234.40, stdev=16.43, samples=10 00:28:12.048 lat (msec) : 10=0.77%, 20=98.30%, 100=0.94% 00:28:12.048 cpu : usr=94.89%, sys=4.58%, ctx=10, majf=0, minf=128 00:28:12.048 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.048 issued rwts: total=1174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.048 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:12.048 00:28:12.048 Run status group 0 (all jobs): 00:28:12.048 READ: bw=80.7MiB/s (84.6MB/s), 23.9MiB/s-29.1MiB/s (25.0MB/s-30.5MB/s), io=407MiB (427MB), run=5046-5048msec 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:12.048 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 bdev_null0 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 [2024-07-25 09:42:43.810773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 bdev_null1 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 bdev_null2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.049 { 00:28:12.049 "params": { 00:28:12.049 "name": "Nvme$subsystem", 00:28:12.049 "trtype": "$TEST_TRANSPORT", 00:28:12.049 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:12.049 "adrfam": "ipv4", 00:28:12.049 "trsvcid": "$NVMF_PORT", 00:28:12.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.049 "hdgst": ${hdgst:-false}, 00:28:12.049 "ddgst": ${ddgst:-false} 00:28:12.049 }, 00:28:12.049 "method": "bdev_nvme_attach_controller" 00:28:12.049 } 00:28:12.049 EOF 00:28:12.049 )") 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.049 { 00:28:12.049 "params": { 00:28:12.049 "name": "Nvme$subsystem", 00:28:12.049 "trtype": "$TEST_TRANSPORT", 00:28:12.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.049 "adrfam": "ipv4", 00:28:12.049 "trsvcid": "$NVMF_PORT", 00:28:12.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.049 "hdgst": ${hdgst:-false}, 00:28:12.049 "ddgst": ${ddgst:-false} 00:28:12.049 }, 00:28:12.049 "method": "bdev_nvme_attach_controller" 00:28:12.049 } 00:28:12.049 EOF 00:28:12.049 )") 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:12.049 
09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:12.049 09:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.050 { 00:28:12.050 "params": { 00:28:12.050 "name": "Nvme$subsystem", 00:28:12.050 "trtype": "$TEST_TRANSPORT", 00:28:12.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.050 "adrfam": "ipv4", 00:28:12.050 "trsvcid": "$NVMF_PORT", 00:28:12.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.050 "hdgst": ${hdgst:-false}, 00:28:12.050 "ddgst": ${ddgst:-false} 00:28:12.050 }, 00:28:12.050 "method": "bdev_nvme_attach_controller" 00:28:12.050 } 00:28:12.050 EOF 00:28:12.050 )") 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.050 "params": { 00:28:12.050 "name": "Nvme0", 00:28:12.050 "trtype": "tcp", 00:28:12.050 "traddr": "10.0.0.2", 00:28:12.050 "adrfam": "ipv4", 00:28:12.050 "trsvcid": "4420", 00:28:12.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:12.050 "hdgst": false, 00:28:12.050 "ddgst": false 00:28:12.050 }, 00:28:12.050 "method": "bdev_nvme_attach_controller" 00:28:12.050 },{ 00:28:12.050 "params": { 00:28:12.050 "name": "Nvme1", 00:28:12.050 "trtype": "tcp", 00:28:12.050 "traddr": "10.0.0.2", 00:28:12.050 "adrfam": "ipv4", 00:28:12.050 "trsvcid": "4420", 00:28:12.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.050 "hdgst": false, 00:28:12.050 "ddgst": false 00:28:12.050 }, 00:28:12.050 "method": "bdev_nvme_attach_controller" 00:28:12.050 },{ 00:28:12.050 "params": { 00:28:12.050 "name": "Nvme2", 00:28:12.050 "trtype": "tcp", 00:28:12.050 "traddr": "10.0.0.2", 00:28:12.050 "adrfam": "ipv4", 00:28:12.050 "trsvcid": "4420", 00:28:12.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.050 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.050 "hdgst": false, 00:28:12.050 "ddgst": false 00:28:12.050 }, 00:28:12.050 "method": "bdev_nvme_attach_controller" 00:28:12.050 }' 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # 
asan_lib= 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:12.050 09:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.050 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:12.050 ... 00:28:12.050 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:12.050 ... 00:28:12.050 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:12.050 ... 00:28:12.050 fio-3.35 00:28:12.050 Starting 24 threads 00:28:12.050 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.236 00:28:24.236 filename0: (groupid=0, jobs=1): err= 0: pid=643917: Thu Jul 25 09:42:55 2024 00:28:24.236 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.8MiB/10030msec) 00:28:24.236 slat (usec): min=6, max=206, avg=30.21, stdev=18.66 00:28:24.236 clat (usec): min=13353, max=45093, avg=33156.12, stdev=1665.11 00:28:24.236 lat (usec): min=13538, max=45151, avg=33186.33, stdev=1663.15 00:28:24.236 clat percentiles (usec): 00:28:24.236 | 1.00th=[28705], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.236 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.236 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.236 | 99.00th=[36439], 99.50th=[37487], 99.90th=[44827], 99.95th=[44827], 00:28:24.236 | 99.99th=[45351] 00:28:24.236 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.60, stdev=28.62, samples=20 00:28:24.236 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:28:24.236 lat (msec) : 20=0.38%, 50=99.63% 00:28:24.236 cpu : usr=97.76%, sys=1.61%, ctx=70, majf=0, minf=51 00:28:24.236 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.236 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.236 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.236 filename0: (groupid=0, jobs=1): err= 0: pid=643918: Thu Jul 25 09:42:55 2024 00:28:24.236 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:28:24.236 slat (nsec): min=9099, max=70475, avg=32813.06, stdev=8927.04 00:28:24.236 clat (usec): min=17759, max=53416, avg=33276.21, stdev=1771.55 00:28:24.236 lat (usec): min=17783, max=53445, avg=33309.02, stdev=1772.09 00:28:24.236 clat percentiles (usec): 00:28:24.236 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.236 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.236 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.236 | 99.00th=[37487], 99.50th=[44827], 99.90th=[53216], 99.95th=[53216], 00:28:24.236 | 99.99th=[53216] 00:28:24.236 bw ( KiB/s): min= 1667, max= 1920, per=4.12%, avg=1900.95, stdev=62.04, samples=20 00:28:24.236 iops : min= 416, max= 480, avg=475.20, stdev=15.66, samples=20 00:28:24.236 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:28:24.236 cpu : usr=97.70%, sys=1.92%, ctx=17, 
majf=0, minf=38 00:28:24.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.236 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.236 filename0: (groupid=0, jobs=1): err= 0: pid=643919: Thu Jul 25 09:42:55 2024 00:28:24.236 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10028msec) 00:28:24.236 slat (usec): min=8, max=120, avg=43.89, stdev=19.87 00:28:24.236 clat (usec): min=12602, max=45129, avg=33041.26, stdev=1727.82 00:28:24.236 lat (usec): min=12649, max=45157, avg=33085.14, stdev=1725.66 00:28:24.236 clat percentiles (usec): 00:28:24.236 | 1.00th=[28181], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:24.236 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:28:24.236 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.236 | 99.00th=[36439], 99.50th=[37487], 99.90th=[44827], 99.95th=[44827], 00:28:24.236 | 99.99th=[45351] 00:28:24.236 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.60, stdev=28.62, samples=20 00:28:24.236 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:28:24.236 lat (msec) : 20=0.33%, 50=99.67% 00:28:24.236 cpu : usr=97.69%, sys=1.68%, ctx=48, majf=0, minf=38 00:28:24.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.236 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.236 filename0: (groupid=0, jobs=1): err= 0: pid=643920: Thu Jul 25 09:42:55 2024 00:28:24.236 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.7MiB/10024msec) 00:28:24.236 slat (usec): min=8, max=113, avg=20.79, stdev=14.90 00:28:24.236 clat (usec): min=19698, max=42789, avg=33359.44, stdev=1176.93 00:28:24.236 lat (usec): min=19735, max=42815, avg=33380.23, stdev=1173.13 00:28:24.236 clat percentiles (usec): 00:28:24.236 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:28:24.237 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[36963], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:28:24.237 | 99.99th=[42730] 00:28:24.237 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1907.20, stdev=39.40, samples=20 00:28:24.237 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:28:24.237 lat (msec) : 20=0.04%, 50=99.96% 00:28:24.237 cpu : usr=97.62%, sys=1.82%, ctx=39, majf=0, minf=49 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename0: (groupid=0, jobs=1): err= 0: pid=643921: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:28:24.237 slat (nsec): min=12425, 
max=78742, avg=34187.63, stdev=8997.26 00:28:24.237 clat (usec): min=23207, max=46178, avg=33270.72, stdev=1177.19 00:28:24.237 lat (usec): min=23267, max=46230, avg=33304.91, stdev=1177.04 00:28:24.237 clat percentiles (usec): 00:28:24.237 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.237 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[37487], 99.50th=[41681], 99.90th=[44827], 99.95th=[45351], 00:28:24.237 | 99.99th=[46400] 00:28:24.237 bw ( KiB/s): min= 1792, max= 1920, per=4.12%, avg=1900.80, stdev=46.89, samples=20 00:28:24.237 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:28:24.237 lat (msec) : 50=100.00% 00:28:24.237 cpu : usr=97.95%, sys=1.53%, ctx=34, majf=0, minf=53 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename0: (groupid=0, jobs=1): err= 0: pid=643922: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:28:24.237 slat (usec): min=10, max=119, avg=38.67, stdev=15.62 00:28:24.237 clat (usec): min=12003, max=84200, avg=33254.85, stdev=2996.43 00:28:24.237 lat (usec): min=12035, max=84237, avg=33293.52, stdev=2995.58 00:28:24.237 clat percentiles (usec): 00:28:24.237 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.237 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[40109], 99.50th=[42730], 99.90th=[74974], 99.95th=[74974], 00:28:24.237 | 99.99th=[84411] 00:28:24.237 bw ( KiB/s): min= 1667, max= 2039, per=4.12%, avg=1900.50, stdev=73.75, samples=20 00:28:24.237 iops : min= 416, max= 509, avg=475.05, stdev=18.49, samples=20 00:28:24.237 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:28:24.237 cpu : usr=97.28%, sys=1.84%, ctx=144, majf=0, minf=40 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename0: (groupid=0, jobs=1): err= 0: pid=643923: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:28:24.237 slat (usec): min=8, max=102, avg=34.12, stdev=10.78 00:28:24.237 clat (usec): min=12133, max=75117, avg=33290.23, stdev=2948.70 00:28:24.237 lat (usec): min=12156, max=75135, avg=33324.35, stdev=2947.24 00:28:24.237 clat percentiles (usec): 00:28:24.237 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.237 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[40109], 99.50th=[42730], 99.90th=[74974], 99.95th=[74974], 00:28:24.237 | 99.99th=[74974] 00:28:24.237 bw ( KiB/s): min= 
1667, max= 2039, per=4.12%, avg=1900.50, stdev=73.75, samples=20 00:28:24.237 iops : min= 416, max= 509, avg=475.05, stdev=18.49, samples=20 00:28:24.237 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:28:24.237 cpu : usr=98.09%, sys=1.46%, ctx=27, majf=0, minf=38 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename0: (groupid=0, jobs=1): err= 0: pid=643924: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10016msec) 00:28:24.237 slat (nsec): min=8305, max=98079, avg=30194.43, stdev=14847.62 00:28:24.237 clat (usec): min=31009, max=46925, avg=33347.22, stdev=1240.64 00:28:24.237 lat (usec): min=31022, max=46958, avg=33377.42, stdev=1240.11 00:28:24.237 clat percentiles (usec): 00:28:24.237 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.237 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[38011], 99.50th=[44827], 99.90th=[46924], 99.95th=[46924], 00:28:24.237 | 99.99th=[46924] 00:28:24.237 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1900.80, stdev=62.64, samples=20 00:28:24.237 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:28:24.237 lat (msec) : 50=100.00% 00:28:24.237 cpu : usr=95.96%, sys=2.49%, ctx=277, majf=0, minf=24 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename1: (groupid=0, jobs=1): err= 0: pid=643925: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10028msec) 00:28:24.237 slat (usec): min=8, max=107, avg=32.13, stdev=12.89 00:28:24.237 clat (usec): min=12454, max=45124, avg=33146.60, stdev=1731.15 00:28:24.237 lat (usec): min=12502, max=45150, avg=33178.73, stdev=1728.84 00:28:24.237 clat percentiles (usec): 00:28:24.237 | 1.00th=[28181], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.237 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[36963], 99.50th=[38011], 99.90th=[44827], 99.95th=[44827], 00:28:24.237 | 99.99th=[45351] 00:28:24.237 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.60, stdev=28.62, samples=20 00:28:24.237 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:28:24.237 lat (msec) : 20=0.33%, 50=99.67% 00:28:24.237 cpu : usr=96.92%, sys=2.10%, ctx=170, majf=0, minf=37 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename1: (groupid=0, jobs=1): err= 0: pid=643926: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:28:24.237 slat (nsec): min=7379, max=97334, avg=32557.01, stdev=10776.06 00:28:24.237 clat (usec): min=17660, max=51970, avg=33264.91, stdev=1726.15 00:28:24.237 lat (usec): min=17669, max=51988, avg=33297.47, stdev=1726.47 00:28:24.237 clat percentiles (usec): 00:28:24.237 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.237 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.237 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.237 | 99.00th=[37487], 99.50th=[44827], 99.90th=[52167], 99.95th=[52167], 00:28:24.237 | 99.99th=[52167] 00:28:24.237 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1899.79, stdev=64.19, samples=19 00:28:24.237 iops : min= 416, max= 480, avg=474.95, stdev=16.05, samples=19 00:28:24.237 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:28:24.237 cpu : usr=97.09%, sys=2.08%, ctx=172, majf=0, minf=39 00:28:24.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.237 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.237 filename1: (groupid=0, jobs=1): err= 0: pid=643927: Thu Jul 25 09:42:55 2024 00:28:24.237 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10016msec) 00:28:24.238 slat (nsec): min=8372, max=94380, avg=31312.09, stdev=15028.45 00:28:24.238 clat (usec): min=22205, max=49776, avg=33328.67, stdev=1312.63 00:28:24.238 lat (usec): min=22236, max=49827, avg=33359.98, stdev=1311.89 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.238 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.238 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.238 | 99.00th=[38011], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:28:24.238 | 99.99th=[49546] 00:28:24.238 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1900.80, stdev=62.64, samples=20 00:28:24.238 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:28:24.238 lat (msec) : 50=100.00% 00:28:24.238 cpu : usr=96.90%, sys=2.10%, ctx=95, majf=0, minf=25 00:28:24.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename1: (groupid=0, jobs=1): err= 0: pid=643928: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:28:24.238 slat (nsec): min=8203, max=93272, avg=27752.24, stdev=11892.48 00:28:24.238 clat (usec): min=15261, max=61096, avg=33330.07, stdev=2158.20 00:28:24.238 lat (usec): min=15275, max=61128, avg=33357.83, stdev=2158.97 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.238 | 30.00th=[32900], 
40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:28:24.238 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.238 | 99.00th=[38011], 99.50th=[44303], 99.90th=[61080], 99.95th=[61080], 00:28:24.238 | 99.99th=[61080] 00:28:24.238 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1900.40, stdev=62.53, samples=20 00:28:24.238 iops : min= 416, max= 480, avg=475.10, stdev=15.63, samples=20 00:28:24.238 lat (msec) : 20=0.38%, 50=99.29%, 100=0.34% 00:28:24.238 cpu : usr=95.59%, sys=2.68%, ctx=773, majf=0, minf=34 00:28:24.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename1: (groupid=0, jobs=1): err= 0: pid=643929: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:28:24.238 slat (usec): min=13, max=220, avg=51.54, stdev=23.23 00:28:24.238 clat (usec): min=28137, max=45049, avg=33121.09, stdev=1186.31 00:28:24.238 lat (usec): min=28178, max=45071, avg=33172.64, stdev=1179.55 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:24.238 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:28:24.238 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.238 | 99.00th=[37487], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:28:24.238 | 99.99th=[44827] 00:28:24.238 bw ( KiB/s): min= 1792, max= 1920, per=4.12%, avg=1900.80, stdev=46.89, samples=20 00:28:24.238 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:28:24.238 lat (msec) : 50=100.00% 00:28:24.238 cpu : usr=97.80%, sys=1.67%, ctx=34, majf=0, minf=31 00:28:24.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename1: (groupid=0, jobs=1): err= 0: pid=643930: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10012msec) 00:28:24.238 slat (nsec): min=8128, max=75576, avg=14578.65, stdev=9307.72 00:28:24.238 clat (usec): min=9433, max=75108, avg=28363.11, stdev=6920.59 00:28:24.238 lat (usec): min=9441, max=75126, avg=28377.69, stdev=6921.20 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[16909], 5.00th=[21627], 10.00th=[21627], 20.00th=[21890], 00:28:24.238 | 30.00th=[23987], 40.00th=[25035], 50.00th=[27657], 60.00th=[28967], 00:28:24.238 | 70.00th=[32900], 80.00th=[33162], 90.00th=[37487], 95.00th=[41157], 00:28:24.238 | 99.00th=[45351], 99.50th=[45876], 99.90th=[74974], 99.95th=[74974], 00:28:24.238 | 99.99th=[74974] 00:28:24.238 bw ( KiB/s): min= 1587, max= 2576, per=4.88%, avg=2248.45, stdev=213.56, samples=20 00:28:24.238 iops : min= 396, max= 644, avg=562.05, stdev=53.49, samples=20 00:28:24.238 lat (msec) : 10=0.11%, 20=2.13%, 50=97.48%, 100=0.28% 00:28:24.238 cpu : usr=96.95%, sys=2.13%, ctx=169, majf=0, minf=65 00:28:24.238 IO depths : 1=0.1%, 2=0.4%, 4=5.2%, 8=80.4%, 
16=13.9%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=88.9%, 8=7.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename1: (groupid=0, jobs=1): err= 0: pid=643931: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:28:24.238 slat (usec): min=9, max=102, avg=36.44, stdev=12.71 00:28:24.238 clat (usec): min=12218, max=74766, avg=33266.48, stdev=2909.30 00:28:24.238 lat (usec): min=12260, max=74804, avg=33302.92, stdev=2909.02 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.238 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.238 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.238 | 99.00th=[40109], 99.50th=[42730], 99.90th=[74974], 99.95th=[74974], 00:28:24.238 | 99.99th=[74974] 00:28:24.238 bw ( KiB/s): min= 1667, max= 2039, per=4.12%, avg=1900.50, stdev=73.75, samples=20 00:28:24.238 iops : min= 416, max= 509, avg=475.05, stdev=18.49, samples=20 00:28:24.238 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:28:24.238 cpu : usr=96.86%, sys=2.10%, ctx=194, majf=0, minf=34 00:28:24.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename1: (groupid=0, jobs=1): err= 0: pid=643932: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.8MiB/10029msec) 00:28:24.238 slat (nsec): min=8650, max=80669, avg=27878.07, stdev=12036.99 00:28:24.238 clat (usec): min=12915, max=45099, avg=33204.01, stdev=1716.39 00:28:24.238 lat (usec): min=12952, max=45119, avg=33231.88, stdev=1714.88 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[28705], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.238 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:28:24.238 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.238 | 99.00th=[36963], 99.50th=[38011], 99.90th=[44827], 99.95th=[44827], 00:28:24.238 | 99.99th=[45351] 00:28:24.238 bw ( KiB/s): min= 1792, max= 1923, per=4.15%, avg=1913.75, stdev=28.66, samples=20 00:28:24.238 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:28:24.238 lat (msec) : 20=0.33%, 50=99.67% 00:28:24.238 cpu : usr=96.28%, sys=2.56%, ctx=118, majf=0, minf=42 00:28:24.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename2: (groupid=0, jobs=1): err= 0: pid=643933: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=477, BW=1910KiB/s (1955kB/s)(18.7MiB/10021msec) 00:28:24.238 slat (usec): min=8, max=124, avg=37.73, stdev=23.29 00:28:24.238 clat (usec): 
min=22004, max=42819, avg=33192.86, stdev=1246.58 00:28:24.238 lat (usec): min=22041, max=42845, avg=33230.59, stdev=1243.22 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:24.238 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.238 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.238 | 99.00th=[36439], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:28:24.238 | 99.99th=[42730] 00:28:24.238 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1907.20, stdev=39.40, samples=20 00:28:24.238 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:28:24.238 lat (msec) : 50=100.00% 00:28:24.238 cpu : usr=97.54%, sys=1.67%, ctx=74, majf=0, minf=55 00:28:24.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.238 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.238 filename2: (groupid=0, jobs=1): err= 0: pid=643934: Thu Jul 25 09:42:55 2024 00:28:24.238 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:28:24.238 slat (usec): min=9, max=137, avg=36.53, stdev=12.95 00:28:24.238 clat (usec): min=24026, max=45074, avg=33257.41, stdev=1163.28 00:28:24.238 lat (usec): min=24038, max=45095, avg=33293.95, stdev=1161.97 00:28:24.238 clat percentiles (usec): 00:28:24.238 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.239 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.239 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[37487], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:28:24.239 | 99.99th=[44827] 00:28:24.239 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1906.53, stdev=40.36, samples=19 00:28:24.239 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:28:24.239 lat (msec) : 50=100.00% 00:28:24.239 cpu : usr=98.08%, sys=1.43%, ctx=24, majf=0, minf=43 00:28:24.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.239 filename2: (groupid=0, jobs=1): err= 0: pid=643935: Thu Jul 25 09:42:55 2024 00:28:24.239 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.8MiB/10029msec) 00:28:24.239 slat (nsec): min=10800, max=76341, avg=32999.50, stdev=9570.79 00:28:24.239 clat (usec): min=12876, max=45114, avg=33140.79, stdev=1726.07 00:28:24.239 lat (usec): min=12913, max=45137, avg=33173.79, stdev=1725.56 00:28:24.239 clat percentiles (usec): 00:28:24.239 | 1.00th=[28705], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.239 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.239 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[36963], 99.50th=[38011], 99.90th=[44827], 99.95th=[44827], 00:28:24.239 | 99.99th=[45351] 00:28:24.239 bw ( KiB/s): min= 1792, max= 1923, per=4.15%, avg=1913.75, stdev=28.66, samples=20 
00:28:24.239 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:28:24.239 lat (msec) : 20=0.33%, 50=99.67% 00:28:24.239 cpu : usr=97.10%, sys=1.99%, ctx=109, majf=0, minf=38 00:28:24.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.239 filename2: (groupid=0, jobs=1): err= 0: pid=643936: Thu Jul 25 09:42:55 2024 00:28:24.239 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:28:24.239 slat (usec): min=6, max=180, avg=14.00, stdev=10.71 00:28:24.239 clat (usec): min=12545, max=47582, avg=33363.32, stdev=1637.71 00:28:24.239 lat (usec): min=12653, max=47604, avg=33377.32, stdev=1632.23 00:28:24.239 clat percentiles (usec): 00:28:24.239 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:28:24.239 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:28:24.239 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[36963], 99.50th=[38536], 99.90th=[44827], 99.95th=[44827], 00:28:24.239 | 99.99th=[47449] 00:28:24.239 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1907.20, stdev=57.24, samples=20 00:28:24.239 iops : min= 448, max= 512, avg=476.80, stdev=14.31, samples=20 00:28:24.239 lat (msec) : 20=0.38%, 50=99.62% 00:28:24.239 cpu : usr=97.17%, sys=1.96%, ctx=186, majf=0, minf=37 00:28:24.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.239 filename2: (groupid=0, jobs=1): err= 0: pid=643937: Thu Jul 25 09:42:55 2024 00:28:24.239 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:28:24.239 slat (usec): min=6, max=109, avg=32.68, stdev=16.89 00:28:24.239 clat (usec): min=15286, max=62717, avg=33284.64, stdev=2246.05 00:28:24.239 lat (usec): min=15301, max=62732, avg=33317.32, stdev=2245.40 00:28:24.239 clat percentiles (usec): 00:28:24.239 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.239 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:28:24.239 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[38011], 99.50th=[44303], 99.90th=[62653], 99.95th=[62653], 00:28:24.239 | 99.99th=[62653] 00:28:24.239 bw ( KiB/s): min= 1667, max= 1920, per=4.12%, avg=1899.95, stdev=61.88, samples=20 00:28:24.239 iops : min= 416, max= 480, avg=474.95, stdev=15.62, samples=20 00:28:24.239 lat (msec) : 20=0.38%, 50=99.29%, 100=0.34% 00:28:24.239 cpu : usr=97.23%, sys=1.91%, ctx=122, majf=0, minf=33 00:28:24.239 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.239 
filename2: (groupid=0, jobs=1): err= 0: pid=643938: Thu Jul 25 09:42:55 2024 00:28:24.239 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10016msec) 00:28:24.239 slat (usec): min=8, max=152, avg=47.20, stdev=25.27 00:28:24.239 clat (usec): min=19401, max=47519, avg=33193.51, stdev=1371.12 00:28:24.239 lat (usec): min=19412, max=47543, avg=33240.71, stdev=1364.35 00:28:24.239 clat percentiles (usec): 00:28:24.239 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:28:24.239 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:28:24.239 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[38011], 99.50th=[44827], 99.90th=[46924], 99.95th=[46924], 00:28:24.239 | 99.99th=[47449] 00:28:24.239 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1900.80, stdev=62.64, samples=20 00:28:24.239 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:28:24.239 lat (msec) : 20=0.04%, 50=99.96% 00:28:24.239 cpu : usr=98.05%, sys=1.49%, ctx=30, majf=0, minf=34 00:28:24.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.239 filename2: (groupid=0, jobs=1): err= 0: pid=643939: Thu Jul 25 09:42:55 2024 00:28:24.239 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10004msec) 00:28:24.239 slat (nsec): min=8726, max=66195, avg=32644.03, stdev=8623.68 00:28:24.239 clat (usec): min=21568, max=44034, avg=33287.62, stdev=1334.26 00:28:24.239 lat (usec): min=21613, max=44069, avg=33320.26, stdev=1333.26 00:28:24.239 clat percentiles (usec): 00:28:24.239 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:24.239 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.239 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:28:24.239 | 99.99th=[43779] 00:28:24.239 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1899.79, stdev=64.19, samples=19 00:28:24.239 iops : min= 448, max= 512, avg=474.95, stdev=16.05, samples=19 00:28:24.239 lat (msec) : 50=100.00% 00:28:24.239 cpu : usr=97.14%, sys=1.88%, ctx=165, majf=0, minf=36 00:28:24.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:24.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.239 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.239 filename2: (groupid=0, jobs=1): err= 0: pid=643940: Thu Jul 25 09:42:55 2024 00:28:24.239 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:28:24.239 slat (nsec): min=8243, max=94978, avg=34555.52, stdev=10412.85 00:28:24.239 clat (usec): min=12165, max=74583, avg=33276.09, stdev=2901.74 00:28:24.239 lat (usec): min=12179, max=74618, avg=33310.64, stdev=2901.90 00:28:24.239 clat percentiles (usec): 00:28:24.239 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:24.239 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:28:24.239 | 70.00th=[33424], 
80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:28:24.239 | 99.00th=[40109], 99.50th=[42730], 99.90th=[73925], 99.95th=[74974], 00:28:24.239 | 99.99th=[74974] 00:28:24.239 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1900.80, stdev=62.64, samples=20 00:28:24.239 iops : min= 416, max= 480, avg=475.20, stdev=15.66, samples=20 00:28:24.239 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:28:24.240 cpu : usr=97.83%, sys=1.59%, ctx=60, majf=0, minf=39 00:28:24.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:24.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.240 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.240 00:28:24.240 Run status group 0 (all jobs): 00:28:24.240 READ: bw=45.0MiB/s (47.2MB/s), 1904KiB/s-2250KiB/s (1950kB/s-2304kB/s), io=451MiB (473MB), run=10004-10030msec 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 bdev_null0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 [2024-07-25 09:42:55.443568] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 bdev_null1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.240 { 00:28:24.240 "params": { 00:28:24.240 "name": 
"Nvme$subsystem", 00:28:24.240 "trtype": "$TEST_TRANSPORT", 00:28:24.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.240 "adrfam": "ipv4", 00:28:24.240 "trsvcid": "$NVMF_PORT", 00:28:24.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.240 "hdgst": ${hdgst:-false}, 00:28:24.240 "ddgst": ${ddgst:-false} 00:28:24.240 }, 00:28:24.240 "method": "bdev_nvme_attach_controller" 00:28:24.240 } 00:28:24.240 EOF 00:28:24.240 )") 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:28:24.240 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.241 { 00:28:24.241 "params": { 00:28:24.241 "name": "Nvme$subsystem", 00:28:24.241 "trtype": "$TEST_TRANSPORT", 00:28:24.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.241 "adrfam": "ipv4", 00:28:24.241 "trsvcid": "$NVMF_PORT", 00:28:24.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.241 "hdgst": ${hdgst:-false}, 00:28:24.241 "ddgst": ${ddgst:-false} 00:28:24.241 }, 00:28:24.241 "method": "bdev_nvme_attach_controller" 00:28:24.241 } 00:28:24.241 EOF 00:28:24.241 )") 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:24.241 "params": { 00:28:24.241 "name": "Nvme0", 00:28:24.241 "trtype": "tcp", 00:28:24.241 "traddr": "10.0.0.2", 00:28:24.241 "adrfam": "ipv4", 00:28:24.241 "trsvcid": "4420", 00:28:24.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.241 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.241 "hdgst": false, 00:28:24.241 "ddgst": false 00:28:24.241 }, 00:28:24.241 "method": "bdev_nvme_attach_controller" 00:28:24.241 },{ 00:28:24.241 "params": { 00:28:24.241 "name": "Nvme1", 00:28:24.241 "trtype": "tcp", 00:28:24.241 "traddr": "10.0.0.2", 00:28:24.241 "adrfam": "ipv4", 00:28:24.241 "trsvcid": "4420", 00:28:24.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:24.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:24.241 "hdgst": false, 00:28:24.241 "ddgst": false 00:28:24.241 }, 00:28:24.241 "method": "bdev_nvme_attach_controller" 00:28:24.241 }' 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:24.241 09:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.241 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:24.241 ... 00:28:24.241 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:24.241 ... 
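For orientation, the fio_bdev invocation traced just above hands the SPDK fio plugin two generated inputs over process substitution: the bdev_nvme attach configuration on /dev/fd/62 and the fio job description on /dev/fd/61. The sketch below reproduces that pairing by hand for this bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 run. It is an approximation only: the JSON wrapper, the bdev names Nvme0n1/Nvme1n1, the temp-file layout, and the extra job options (thread, direct, time_based) are assumptions for illustration, since the real inputs come from gen_nvmf_target_json() and gen_fio_conf() in the traced scripts; the attach parameters themselves mirror the values printed earlier in this log.

  #!/usr/bin/env bash
  # Hand-written approximation of the two inputs fed to the SPDK fio plugin above.
  cat > /tmp/nvme_attach.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false } },
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } }
        ]
      }
    ]
  }
  EOF

  cat > /tmp/rand_params.fio <<'EOF'
  [global]
  thread=1
  direct=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1   # bdev exposed by the Nvme0 attach entry (assumed name)

  [filename1]
  filename=Nvme1n1   # bdev exposed by the Nvme1 attach entry (assumed name)
  EOF

  # The plugin library is preloaded so fio can resolve the spdk_bdev ioengine.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme_attach.json /tmp/rand_params.fio

In the actual run the two files never touch disk; dif.sh writes them to the anonymous descriptors /dev/fd/62 and /dev/fd/61, which is why only those paths appear on the traced command line.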
00:28:24.241 fio-3.35 00:28:24.241 Starting 4 threads 00:28:24.241 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.503 00:28:29.503 filename0: (groupid=0, jobs=1): err= 0: pid=645205: Thu Jul 25 09:43:01 2024 00:28:29.503 read: IOPS=1754, BW=13.7MiB/s (14.4MB/s)(68.5MiB/5001msec) 00:28:29.503 slat (nsec): min=3995, max=62322, avg=20887.49, stdev=9082.73 00:28:29.503 clat (usec): min=827, max=9052, avg=4483.66, stdev=688.11 00:28:29.503 lat (usec): min=847, max=9064, avg=4504.55, stdev=686.99 00:28:29.503 clat percentiles (usec): 00:28:29.503 | 1.00th=[ 2311], 5.00th=[ 3818], 10.00th=[ 3982], 20.00th=[ 4228], 00:28:29.503 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:28:29.503 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5735], 00:28:29.503 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 8225], 00:28:29.503 | 99.99th=[ 9110] 00:28:29.503 bw ( KiB/s): min=13456, max=14368, per=24.20%, avg=13857.78, stdev=325.69, samples=9 00:28:29.503 iops : min= 1682, max= 1796, avg=1732.22, stdev=40.71, samples=9 00:28:29.503 lat (usec) : 1000=0.05% 00:28:29.503 lat (msec) : 2=0.68%, 4=10.43%, 10=88.84% 00:28:29.503 cpu : usr=95.36%, sys=4.14%, ctx=7, majf=0, minf=11 00:28:29.503 IO depths : 1=0.3%, 2=17.8%, 4=55.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 issued rwts: total=8772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.503 filename0: (groupid=0, jobs=1): err= 0: pid=645206: Thu Jul 25 09:43:01 2024 00:28:29.503 read: IOPS=1809, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5001msec) 00:28:29.503 slat (nsec): min=6571, max=69014, avg=18316.13, stdev=10364.76 00:28:29.503 clat (usec): min=812, max=8077, avg=4357.11, stdev=475.36 00:28:29.503 lat (usec): min=826, max=8092, avg=4375.43, stdev=475.16 00:28:29.503 clat percentiles (usec): 00:28:29.503 | 1.00th=[ 3032], 5.00th=[ 3720], 10.00th=[ 3884], 20.00th=[ 4080], 00:28:29.503 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:28:29.503 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4883], 00:28:29.503 | 99.00th=[ 6128], 99.50th=[ 6783], 99.90th=[ 7439], 99.95th=[ 7767], 00:28:29.503 | 99.99th=[ 8094] 00:28:29.503 bw ( KiB/s): min=14096, max=14621, per=25.10%, avg=14371.22, stdev=192.64, samples=9 00:28:29.503 iops : min= 1762, max= 1827, avg=1796.33, stdev=23.98, samples=9 00:28:29.503 lat (usec) : 1000=0.01% 00:28:29.503 lat (msec) : 2=0.19%, 4=15.04%, 10=84.76% 00:28:29.503 cpu : usr=95.40%, sys=4.06%, ctx=13, majf=0, minf=2 00:28:29.503 IO depths : 1=0.3%, 2=19.1%, 4=54.1%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 issued rwts: total=9048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.503 filename1: (groupid=0, jobs=1): err= 0: pid=645207: Thu Jul 25 09:43:01 2024 00:28:29.503 read: IOPS=1818, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5002msec) 00:28:29.503 slat (nsec): min=3950, max=64721, avg=15245.28, stdev=8260.78 00:28:29.503 clat (usec): min=1006, max=7589, avg=4347.61, stdev=440.66 00:28:29.503 lat (usec): min=1020, max=7602, avg=4362.85, stdev=440.64 00:28:29.503 clat 
percentiles (usec): 00:28:29.503 | 1.00th=[ 3032], 5.00th=[ 3720], 10.00th=[ 3916], 20.00th=[ 4080], 00:28:29.503 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:28:29.503 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:28:29.503 | 99.00th=[ 5735], 99.50th=[ 6194], 99.90th=[ 6915], 99.95th=[ 7308], 00:28:29.503 | 99.99th=[ 7570] 00:28:29.503 bw ( KiB/s): min=14144, max=14640, per=25.25%, avg=14453.33, stdev=194.65, samples=9 00:28:29.503 iops : min= 1768, max= 1830, avg=1806.67, stdev=24.33, samples=9 00:28:29.503 lat (msec) : 2=0.30%, 4=13.76%, 10=85.94% 00:28:29.503 cpu : usr=95.48%, sys=4.04%, ctx=8, majf=0, minf=0 00:28:29.503 IO depths : 1=0.7%, 2=12.5%, 4=60.4%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 issued rwts: total=9096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.503 filename1: (groupid=0, jobs=1): err= 0: pid=645208: Thu Jul 25 09:43:01 2024 00:28:29.503 read: IOPS=1775, BW=13.9MiB/s (14.5MB/s)(69.4MiB/5001msec) 00:28:29.503 slat (nsec): min=4041, max=69041, avg=21011.72, stdev=10336.47 00:28:29.503 clat (usec): min=829, max=8081, avg=4424.94, stdev=612.89 00:28:29.503 lat (usec): min=850, max=8089, avg=4445.95, stdev=611.97 00:28:29.503 clat percentiles (usec): 00:28:29.503 | 1.00th=[ 2343], 5.00th=[ 3785], 10.00th=[ 3949], 20.00th=[ 4146], 00:28:29.503 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:28:29.503 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5538], 00:28:29.503 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 7898], 00:28:29.503 | 99.99th=[ 8094] 00:28:29.503 bw ( KiB/s): min=13691, max=14320, per=24.52%, avg=14038.56, stdev=215.39, samples=9 00:28:29.503 iops : min= 1711, max= 1790, avg=1754.78, stdev=27.00, samples=9 00:28:29.503 lat (usec) : 1000=0.06% 00:28:29.503 lat (msec) : 2=0.71%, 4=11.40%, 10=87.84% 00:28:29.503 cpu : usr=95.60%, sys=3.74%, ctx=56, majf=0, minf=9 00:28:29.503 IO depths : 1=0.7%, 2=20.6%, 4=53.4%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.503 issued rwts: total=8880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.503 00:28:29.503 Run status group 0 (all jobs): 00:28:29.503 READ: bw=55.9MiB/s (58.6MB/s), 13.7MiB/s-14.2MiB/s (14.4MB/s-14.9MB/s), io=280MiB (293MB), run=5001-5002msec 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:29.503 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 00:28:29.504 real 0m24.174s 00:28:29.504 user 4m31.372s 00:28:29.504 sys 0m7.175s 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 ************************************ 00:28:29.504 END TEST fio_dif_rand_params 00:28:29.504 ************************************ 00:28:29.504 09:43:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:29.504 09:43:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:29.504 09:43:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 ************************************ 00:28:29.504 START TEST fio_dif_digest 00:28:29.504 ************************************ 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 bdev_null0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 [2024-07-25 09:43:01.954287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:29.504 { 00:28:29.504 "params": { 00:28:29.504 "name": "Nvme$subsystem", 00:28:29.504 "trtype": "$TEST_TRANSPORT", 00:28:29.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.504 "adrfam": "ipv4", 00:28:29.504 "trsvcid": "$NVMF_PORT", 00:28:29.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.504 "hdgst": ${hdgst:-false}, 00:28:29.504 "ddgst": ${ddgst:-false} 00:28:29.504 }, 00:28:29.504 "method": "bdev_nvme_attach_controller" 
00:28:29.504 } 00:28:29.504 EOF 00:28:29.504 )") 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:29.504 "params": { 00:28:29.504 "name": "Nvme0", 00:28:29.504 "trtype": "tcp", 00:28:29.504 "traddr": "10.0.0.2", 00:28:29.504 "adrfam": "ipv4", 00:28:29.504 "trsvcid": "4420", 00:28:29.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.504 "hdgst": true, 00:28:29.504 "ddgst": true 00:28:29.504 }, 00:28:29.504 "method": "bdev_nvme_attach_controller" 00:28:29.504 }' 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:29.504 09:43:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:29.504 09:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:29.504 09:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:29.504 09:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:29.504 09:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.504 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:29.504 ... 
00:28:29.504 fio-3.35 00:28:29.504 Starting 3 threads 00:28:29.762 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.955 00:28:41.956 filename0: (groupid=0, jobs=1): err= 0: pid=646074: Thu Jul 25 09:43:12 2024 00:28:41.956 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(267MiB/10049msec) 00:28:41.956 slat (nsec): min=4410, max=42154, avg=14362.17, stdev=3409.16 00:28:41.956 clat (usec): min=10842, max=52605, avg=14087.94, stdev=1569.29 00:28:41.956 lat (usec): min=10856, max=52620, avg=14102.30, stdev=1569.19 00:28:41.956 clat percentiles (usec): 00:28:41.956 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12780], 20.00th=[13173], 00:28:41.956 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:28:41.956 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:28:41.956 | 99.00th=[16909], 99.50th=[17171], 99.90th=[25297], 99.95th=[50594], 00:28:41.956 | 99.99th=[52691] 00:28:41.956 bw ( KiB/s): min=26112, max=28160, per=34.28%, avg=27289.60, stdev=465.42, samples=20 00:28:41.956 iops : min= 204, max= 220, avg=213.20, stdev= 3.64, samples=20 00:28:41.956 lat (msec) : 20=99.77%, 50=0.14%, 100=0.09% 00:28:41.956 cpu : usr=90.26%, sys=9.25%, ctx=21, majf=0, minf=128 00:28:41.956 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.956 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.956 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.956 filename0: (groupid=0, jobs=1): err= 0: pid=646075: Thu Jul 25 09:43:12 2024 00:28:41.956 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10049msec) 00:28:41.956 slat (nsec): min=4896, max=51985, avg=14695.73, stdev=3681.69 00:28:41.956 clat (usec): min=11002, max=52642, avg=14276.11, stdev=1494.10 00:28:41.956 lat (usec): min=11016, max=52656, avg=14290.81, stdev=1494.00 00:28:41.956 clat percentiles (usec): 00:28:41.956 | 1.00th=[12125], 5.00th=[12780], 10.00th=[13042], 20.00th=[13435], 00:28:41.956 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:28:41.956 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:28:41.956 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21627], 99.95th=[48497], 00:28:41.956 | 99.99th=[52691] 00:28:41.956 bw ( KiB/s): min=25856, max=27648, per=33.82%, avg=26921.00, stdev=542.45, samples=20 00:28:41.956 iops : min= 202, max= 216, avg=210.30, stdev= 4.27, samples=20 00:28:41.956 lat (msec) : 20=99.76%, 50=0.19%, 100=0.05% 00:28:41.956 cpu : usr=90.91%, sys=8.59%, ctx=24, majf=0, minf=153 00:28:41.956 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.956 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.956 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.956 filename0: (groupid=0, jobs=1): err= 0: pid=646076: Thu Jul 25 09:43:12 2024 00:28:41.956 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10048msec) 00:28:41.956 slat (nsec): min=4909, max=36618, avg=14517.56, stdev=3477.56 00:28:41.956 clat (usec): min=11459, max=51979, avg=14957.20, stdev=1508.95 00:28:41.956 lat (usec): min=11472, max=51993, avg=14971.72, stdev=1508.92 00:28:41.956 clat percentiles (usec): 00:28:41.956 | 
1.00th=[12780], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:28:41.956 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:28:41.956 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:28:41.956 | 99.00th=[17433], 99.50th=[17957], 99.90th=[23200], 99.95th=[48497], 00:28:41.956 | 99.99th=[52167] 00:28:41.956 bw ( KiB/s): min=24832, max=26368, per=32.28%, avg=25702.40, stdev=480.01, samples=20 00:28:41.956 iops : min= 194, max= 206, avg=200.80, stdev= 3.75, samples=20 00:28:41.956 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:28:41.956 cpu : usr=91.24%, sys=8.26%, ctx=23, majf=0, minf=101 00:28:41.956 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.956 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.956 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.956 00:28:41.956 Run status group 0 (all jobs): 00:28:41.956 READ: bw=77.7MiB/s (81.5MB/s), 25.0MiB/s-26.5MiB/s (26.2MB/s-27.8MB/s), io=781MiB (819MB), run=10048-10049msec 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.956 00:28:41.956 real 0m11.095s 00:28:41.956 user 0m28.654s 00:28:41.956 sys 0m2.918s 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:41.956 09:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.956 ************************************ 00:28:41.956 END TEST fio_dif_digest 00:28:41.956 ************************************ 00:28:41.956 09:43:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:41.956 09:43:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.956 rmmod nvme_tcp 00:28:41.956 rmmod nvme_fabrics 00:28:41.956 
rmmod nvme_keyring 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 640024 ']' 00:28:41.956 09:43:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 640024 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 640024 ']' 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 640024 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 640024 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 640024' 00:28:41.957 killing process with pid 640024 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@967 -- # kill 640024 00:28:41.957 09:43:13 nvmf_dif -- common/autotest_common.sh@972 -- # wait 640024 00:28:41.957 09:43:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:41.957 09:43:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:41.957 Waiting for block devices as requested 00:28:41.957 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:28:41.957 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:41.957 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:42.215 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:42.215 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:42.215 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:42.474 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:42.474 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:42.474 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:42.474 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:42.732 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:42.732 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:42.732 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:42.732 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:42.732 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:42.989 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:42.989 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:42.989 09:43:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.989 09:43:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.989 09:43:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.989 09:43:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.990 09:43:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.990 09:43:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:42.990 09:43:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.532 09:43:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:45.532 00:28:45.532 real 1m6.703s 00:28:45.532 user 6m27.503s 00:28:45.532 sys 0m19.554s 00:28:45.532 09:43:17 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:45.532 09:43:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:45.532 
************************************ 00:28:45.532 END TEST nvmf_dif 00:28:45.532 ************************************ 00:28:45.532 09:43:17 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:45.532 09:43:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:45.532 09:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.532 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:28:45.532 ************************************ 00:28:45.532 START TEST nvmf_abort_qd_sizes 00:28:45.532 ************************************ 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:45.532 * Looking for test storage... 00:28:45.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.532 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.533 09:43:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:45.533 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:47.035 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.035 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:47.035 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.036 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:47.294 Found net devices under 0000:82:00.0: cvl_0_0 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:47.294 Found net devices under 0000:82:00.1: cvl_0_1 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
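For reference, the single-host loopback topology that nvmftestinit builds in the trace below can be reproduced with the same commands the harness runs. This is a condensed sketch, assuming the two E810 ports detected above keep the interface names cvl_0_0 and cvl_0_1 reported by the trace and that 10.0.0.1/10.0.0.2 are unused on the host:

    # target side: isolate one port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: the peer port stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # open the NVMe/TCP listener port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1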
00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:47.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:28:47.294 00:28:47.294 --- 10.0.0.2 ping statistics --- 00:28:47.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.294 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:28:47.294 00:28:47.294 --- 10.0.0.1 ping statistics --- 00:28:47.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.294 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:28:47.294 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.295 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:47.295 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:47.295 09:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:48.669 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.669 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.669 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:50.573 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=650895 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 650895 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 650895 ']' 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:50.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.573 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.573 [2024-07-25 09:43:23.142213] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:28:50.573 [2024-07-25 09:43:23.142285] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.573 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.573 [2024-07-25 09:43:23.205180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.838 [2024-07-25 09:43:23.313431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.838 [2024-07-25 09:43:23.313475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.838 [2024-07-25 09:43:23.313499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.838 [2024-07-25 09:43:23.313510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.838 [2024-07-25 09:43:23.313520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.838 [2024-07-25 09:43:23.313567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.838 [2024-07-25 09:43:23.313625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.838 [2024-07-25 09:43:23.313670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.838 [2024-07-25 09:43:23.313673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:81:00.0 ]] 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:81:00.0 ]] 00:28:50.838 09:43:23 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:81:00.0 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:81:00.0 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.838 09:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.838 ************************************ 00:28:50.838 START TEST spdk_target_abort 00:28:50.838 ************************************ 00:28:50.838 09:43:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:50.838 09:43:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:50.838 09:43:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:81:00.0 -b spdk_target 00:28:50.838 09:43:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.838 09:43:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.122 spdk_targetn1 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.122 [2024-07-25 09:43:26.328946] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.122 [2024-07-25 09:43:26.361148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:54.122 09:43:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:54.122 EAL: No free 2048 kB hugepages 
reported on node 1 00:28:57.401 Initializing NVMe Controllers 00:28:57.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:57.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:57.401 Initialization complete. Launching workers. 00:28:57.401 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11674, failed: 0 00:28:57.401 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1315, failed to submit 10359 00:28:57.401 success 697, unsuccess 618, failed 0 00:28:57.401 09:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:57.401 09:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:57.401 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.673 Initializing NVMe Controllers 00:29:00.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:00.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:00.673 Initialization complete. Launching workers. 00:29:00.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8552, failed: 0 00:29:00.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7315 00:29:00.673 success 326, unsuccess 911, failed 0 00:29:00.673 09:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:00.673 09:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:00.673 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.941 Initializing NVMe Controllers 00:29:03.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:03.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:03.941 Initialization complete. Launching workers. 
00:29:03.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31666, failed: 0 00:29:03.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2798, failed to submit 28868 00:29:03.941 success 548, unsuccess 2250, failed 0 00:29:03.941 09:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:03.941 09:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.941 09:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.941 09:43:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.941 09:43:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:03.941 09:43:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.941 09:43:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 650895 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 650895 ']' 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 650895 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 650895 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 650895' 00:29:05.837 killing process with pid 650895 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 650895 00:29:05.837 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 650895 00:29:06.095 00:29:06.095 real 0m15.091s 00:29:06.095 user 0m57.006s 00:29:06.095 sys 0m2.867s 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:06.095 ************************************ 00:29:06.095 END TEST spdk_target_abort 00:29:06.095 ************************************ 00:29:06.095 09:43:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:06.095 09:43:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:06.095 09:43:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.095 09:43:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:06.095 ************************************ 00:29:06.095 START TEST kernel_target_abort 00:29:06.095 
************************************ 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:06.095 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:07.027 Waiting for block devices as requested 00:29:07.027 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:29:07.285 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:07.286 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:07.286 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:07.544 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:07.544 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:07.544 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:07.544 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:07.802 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:07.802 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:07.802 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:07.802 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:08.061 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:08.061 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:08.061 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:08.061 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:08.320 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:08.320 No valid GPT data, bailing 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:08.320 09:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:08.320 09:43:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:08.320 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:29:08.578 00:29:08.578 Discovery Log Number of Records 2, Generation counter 2 00:29:08.578 =====Discovery Log Entry 0====== 00:29:08.578 trtype: tcp 00:29:08.578 adrfam: ipv4 00:29:08.578 subtype: current discovery subsystem 00:29:08.578 treq: not specified, sq flow control disable supported 00:29:08.578 portid: 1 00:29:08.578 trsvcid: 4420 00:29:08.578 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:08.578 traddr: 10.0.0.1 00:29:08.578 eflags: none 00:29:08.578 sectype: none 00:29:08.578 =====Discovery Log Entry 1====== 00:29:08.578 trtype: tcp 00:29:08.578 adrfam: ipv4 00:29:08.578 subtype: nvme subsystem 00:29:08.578 treq: not specified, sq flow control disable supported 00:29:08.578 portid: 1 00:29:08.578 trsvcid: 4420 00:29:08.578 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:08.578 traddr: 10.0.0.1 00:29:08.578 eflags: none 00:29:08.578 sectype: none 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.578 09:43:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:08.578 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:08.578 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.858 Initializing NVMe Controllers 00:29:11.859 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:11.859 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:11.859 Initialization complete. Launching workers. 00:29:11.859 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 46033, failed: 0 00:29:11.859 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 46033, failed to submit 0 00:29:11.859 success 0, unsuccess 46033, failed 0 00:29:11.859 09:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.859 09:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.859 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.135 Initializing NVMe Controllers 00:29:15.135 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:15.135 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:15.135 Initialization complete. Launching workers. 
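The kernel_target_abort run traced above has two halves: configure_kernel_target wires a Linux kernel nvmet target together through configfs (a test subsystem, namespace 1 backed by the locally detected /dev/nvme0n1, and a TCP port on 10.0.0.1:4420), and rabort then drives the SPDK abort example against it at queue depths 4, 24 and 64. A minimal sketch of that configfs wiring follows; the attribute file names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs names and are an assumption here, since the trace only shows bare echo commands, and the SPDK-prefixed serial/model write is omitted.
modprobe nvmet nvmet_tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"       # assumed target of the bare "echo 1" above
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # block device that passed the GPT check
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# Each pass in the log is then just the abort example with a different -q:
#   build/examples/abort -q 4 -w rw -M 50 -o 4096 \
#     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'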
00:29:15.135 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82479, failed: 0 00:29:15.135 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20790, failed to submit 61689 00:29:15.135 success 0, unsuccess 20790, failed 0 00:29:15.135 09:43:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:15.135 09:43:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:15.135 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.414 Initializing NVMe Controllers 00:29:18.414 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:18.414 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:18.414 Initialization complete. Launching workers. 00:29:18.414 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84457, failed: 0 00:29:18.414 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21106, failed to submit 63351 00:29:18.414 success 0, unsuccess 21106, failed 0 00:29:18.414 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:18.414 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:18.414 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:18.414 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.414 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:18.414 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:18.415 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.415 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:18.415 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:18.415 09:43:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:18.981 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:18.981 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:29:18.981 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:18.981 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:20.882 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:29:20.882 00:29:20.882 real 0m14.950s 00:29:20.882 user 0m6.580s 00:29:20.882 sys 0m2.936s 00:29:20.882 09:43:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.882 09:43:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.882 ************************************ 00:29:20.882 END TEST kernel_target_abort 00:29:20.882 ************************************ 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.882 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.882 rmmod nvme_tcp 00:29:21.140 rmmod nvme_fabrics 00:29:21.140 rmmod nvme_keyring 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 650895 ']' 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 650895 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 650895 ']' 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 650895 00:29:21.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (650895) - No such process 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 650895 is not found' 00:29:21.140 Process with pid 650895 is not found 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:21.140 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:22.072 Waiting for block devices as requested 00:29:22.331 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:29:22.331 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:22.331 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:22.589 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:22.589 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:22.589 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:22.847 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:22.847 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:22.847 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:22.847 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:22.847 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:23.106 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:23.106 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:23.106 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:23.365 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:23.365 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:29:23.365 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:23.625 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.525 09:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:25.525 00:29:25.525 real 0m40.402s 00:29:25.525 user 1m5.716s 00:29:25.525 sys 0m9.222s 00:29:25.525 09:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.525 09:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:25.525 ************************************ 00:29:25.525 END TEST nvmf_abort_qd_sizes 00:29:25.525 ************************************ 00:29:25.525 09:43:58 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:25.525 09:43:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:25.525 09:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.525 09:43:58 -- common/autotest_common.sh@10 -- # set +x 00:29:25.525 ************************************ 00:29:25.525 START TEST keyring_file 00:29:25.525 ************************************ 00:29:25.525 09:43:58 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:25.784 * Looking for test storage... 
00:29:25.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:25.784 09:43:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.784 09:43:58 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.784 09:43:58 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.784 09:43:58 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.784 09:43:58 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.784 09:43:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.784 09:43:58 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.784 09:43:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:25.784 09:43:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.784 09:43:58 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.784 09:43:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:25.784 09:43:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:25.784 09:43:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:25.784 09:43:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XqI3I3o7o1 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:25.785 09:43:58 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XqI3I3o7o1 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XqI3I3o7o1 00:29:25.785 09:43:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XqI3I3o7o1 00:29:25.785 09:43:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JLk1Q0KnwR 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:25.785 09:43:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JLk1Q0KnwR 00:29:25.785 09:43:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JLk1Q0KnwR 00:29:25.785 09:43:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.JLk1Q0KnwR 00:29:25.785 09:43:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=656792 00:29:25.785 09:43:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:25.785 09:43:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 656792 00:29:25.785 09:43:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 656792 ']' 00:29:25.785 09:43:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.785 09:43:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.785 09:43:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.785 09:43:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.785 09:43:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:25.785 [2024-07-25 09:43:58.439927] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
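The prep_key traces above show how each test key file is produced: a raw hex PSK is wrapped into the NVMe/TCP TLS interchange format (the NVMeTLSkey-1 prefix, a two-digit hash identifier, and a base64 payload) by a small inline Python program, and the file is locked to mode 0600. The sketch below is only an approximation of that helper: the payload layout (key bytes plus a little-endian CRC-32 before base64 encoding) follows the interchange-format convention as I understand it and is not shown verbatim in this log, and the function name is invented.
# Hypothetical stand-in for the prep_key/format_interchange_psk steps traced above.
make_interchange_key() {
    local hexkey=$1 digest=$2 path
    path=$(mktemp)
    python3 - "$hexkey" "$digest" > "$path" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])                      # e.g. 00112233445566778899aabbccddeeff
crc = struct.pack("<I", zlib.crc32(key))              # assumed little-endian CRC-32 suffix
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$path"   # anything looser is rejected later in this log (0660 -> Operation not permitted)
    echo "$path"
}
# usage mirroring the trace: make_interchange_key 00112233445566778899aabbccddeeff 0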
00:29:25.785 [2024-07-25 09:43:58.440007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656792 ] 00:29:25.785 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.785 [2024-07-25 09:43:58.496484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.043 [2024-07-25 09:43:58.602754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:26.301 09:43:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:26.301 [2024-07-25 09:43:58.849332] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.301 null0 00:29:26.301 [2024-07-25 09:43:58.881425] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:26.301 [2024-07-25 09:43:58.881882] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:26.301 [2024-07-25 09:43:58.889403] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.301 09:43:58 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:26.301 [2024-07-25 09:43:58.897423] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:26.301 request: 00:29:26.301 { 00:29:26.301 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.301 "secure_channel": false, 00:29:26.301 "listen_address": { 00:29:26.301 "trtype": "tcp", 00:29:26.301 "traddr": "127.0.0.1", 00:29:26.301 "trsvcid": "4420" 00:29:26.301 }, 00:29:26.301 "method": "nvmf_subsystem_add_listener", 00:29:26.301 "req_id": 1 00:29:26.301 } 00:29:26.301 Got JSON-RPC error response 00:29:26.301 response: 00:29:26.301 { 00:29:26.301 "code": -32602, 00:29:26.301 "message": "Invalid parameters" 00:29:26.301 } 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:26.301 09:43:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:26.302 09:43:58 keyring_file -- keyring/file.sh@46 -- # bperfpid=656802 00:29:26.302 09:43:58 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:26.302 09:43:58 keyring_file -- keyring/file.sh@48 -- # waitforlisten 656802 /var/tmp/bperf.sock 00:29:26.302 09:43:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 656802 ']' 00:29:26.302 09:43:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.302 09:43:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:26.302 09:43:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:26.302 09:43:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:26.302 09:43:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:26.302 [2024-07-25 09:43:58.947520] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:29:26.302 [2024-07-25 09:43:58.947596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656802 ] 00:29:26.302 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.302 [2024-07-25 09:43:59.006132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.559 [2024-07-25 09:43:59.115078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.559 09:43:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.559 09:43:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:26.559 09:43:59 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:26.559 09:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:26.817 09:43:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JLk1Q0KnwR 00:29:26.817 09:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JLk1Q0KnwR 00:29:27.075 09:43:59 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:27.075 09:43:59 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:27.075 09:43:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.075 09:43:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:27.075 09:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.333 09:43:59 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.XqI3I3o7o1 == \/\t\m\p\/\t\m\p\.\X\q\I\3\I\3\o\7\o\1 ]] 00:29:27.333 09:43:59 keyring_file -- keyring/file.sh@52 
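As the traces above show, the bperf_cmd wrapper is simply scripts/rpc.py pointed at the bdevperf application's UNIX socket. For reference, the key registration just performed and the TLS attach used later in this test look like this when issued by hand (socket and key paths are the ones from this particular run and would differ elsewhere):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# Register both PSK files with the bdevperf instance's file-based keyring.
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.JLk1Q0KnwR
# Inspect a key's path/refcnt, as the get_key/get_refcnt helpers below do.
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")'
# Hand key0 to the initiator as the TLS PSK when attaching the controller.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0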
-- # get_key key1 00:29:27.333 09:43:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:27.333 09:43:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.333 09:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.333 09:43:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:27.591 09:44:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JLk1Q0KnwR == \/\t\m\p\/\t\m\p\.\J\L\k\1\Q\0\K\n\w\R ]] 00:29:27.591 09:44:00 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:27.591 09:44:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:27.591 09:44:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:27.591 09:44:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.591 09:44:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.591 09:44:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:27.849 09:44:00 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:27.849 09:44:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:27.849 09:44:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:27.849 09:44:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:27.849 09:44:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.849 09:44:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.849 09:44:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:28.106 09:44:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:28.106 09:44:00 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:28.106 09:44:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:28.391 [2024-07-25 09:44:00.932754] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:28.391 nvme0n1 00:29:28.391 09:44:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:28.391 09:44:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:28.391 09:44:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.391 09:44:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:28.391 09:44:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.391 09:44:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:28.677 09:44:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:28.677 09:44:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:28.677 09:44:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:28.677 09:44:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.677 09:44:01 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:29:28.677 09:44:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.677 09:44:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:28.936 09:44:01 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:28.936 09:44:01 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.936 Running I/O for 1 seconds... 00:29:30.306 00:29:30.306 Latency(us) 00:29:30.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.306 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:30.306 nvme0n1 : 1.01 9127.70 35.66 0.00 0.00 13976.47 6043.88 25243.50 00:29:30.306 =================================================================================================================== 00:29:30.306 Total : 9127.70 35.66 0.00 0.00 13976.47 6043.88 25243.50 00:29:30.306 0 00:29:30.306 09:44:02 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:30.306 09:44:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:30.306 09:44:02 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:30.306 09:44:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:30.306 09:44:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:30.306 09:44:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:30.306 09:44:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:30.306 09:44:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:30.564 09:44:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:30.564 09:44:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:30.564 09:44:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:30.564 09:44:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:30.564 09:44:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:30.564 09:44:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:30.564 09:44:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:30.820 09:44:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:30.820 09:44:03 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.820 09:44:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:30.820 09:44:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:31.077 [2024-07-25 09:44:03.629061] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:31.077 [2024-07-25 09:44:03.629903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b9a0 (107): Transport endpoint is not connected 00:29:31.077 [2024-07-25 09:44:03.630883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b9a0 (9): Bad file descriptor 00:29:31.077 [2024-07-25 09:44:03.631881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:31.077 [2024-07-25 09:44:03.631913] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:31.077 [2024-07-25 09:44:03.631929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:31.077 request: 00:29:31.077 { 00:29:31.077 "name": "nvme0", 00:29:31.077 "trtype": "tcp", 00:29:31.077 "traddr": "127.0.0.1", 00:29:31.077 "adrfam": "ipv4", 00:29:31.077 "trsvcid": "4420", 00:29:31.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:31.077 "prchk_reftag": false, 00:29:31.077 "prchk_guard": false, 00:29:31.077 "hdgst": false, 00:29:31.077 "ddgst": false, 00:29:31.077 "psk": "key1", 00:29:31.077 "method": "bdev_nvme_attach_controller", 00:29:31.077 "req_id": 1 00:29:31.077 } 00:29:31.077 Got JSON-RPC error response 00:29:31.077 response: 00:29:31.077 { 00:29:31.077 "code": -5, 00:29:31.077 "message": "Input/output error" 00:29:31.077 } 00:29:31.077 09:44:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:31.077 09:44:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:31.077 09:44:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:31.077 09:44:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:31.077 09:44:03 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:31.077 09:44:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:31.077 09:44:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:31.077 09:44:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.077 09:44:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:31.077 09:44:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.334 09:44:03 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:31.334 09:44:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:31.334 09:44:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:31.334 09:44:03 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:29:31.334 09:44:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.334 09:44:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:31.334 09:44:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.590 09:44:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:31.590 09:44:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:31.591 09:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:31.847 09:44:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:31.847 09:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:32.104 09:44:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:32.104 09:44:04 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:32.104 09:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.362 09:44:04 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:32.362 09:44:04 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.XqI3I3o7o1 00:29:32.362 09:44:04 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.362 09:44:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:32.362 09:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:32.620 [2024-07-25 09:44:05.135834] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XqI3I3o7o1': 0100660 00:29:32.620 [2024-07-25 09:44:05.135872] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:32.620 request: 00:29:32.620 { 00:29:32.620 "name": "key0", 00:29:32.620 "path": "/tmp/tmp.XqI3I3o7o1", 00:29:32.620 "method": "keyring_file_add_key", 00:29:32.620 "req_id": 1 00:29:32.620 } 00:29:32.620 Got JSON-RPC error response 00:29:32.620 response: 00:29:32.620 { 00:29:32.620 "code": -1, 00:29:32.620 "message": "Operation not permitted" 00:29:32.620 } 00:29:32.620 09:44:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:32.620 09:44:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:32.620 09:44:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:32.620 09:44:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
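The negative case that just completed is a permissions check: once the key file is loosened to 0660, keyring_file_add_key logs "Invalid permissions for key file ... 0100660" and the RPC fails with code -1 / Operation not permitted, so the file-based keyring only accepts key files private to their owner. Reproducing and undoing it by hand, reusing the rpc/sock shorthand from the earlier sketch:
chmod 0660 /tmp/tmp.XqI3I3o7o1
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1   # rejected: Operation not permitted
chmod 0600 /tmp/tmp.XqI3I3o7o1
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1   # accepted, as the trace below shows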
00:29:32.620 09:44:05 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.XqI3I3o7o1 00:29:32.620 09:44:05 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:32.620 09:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XqI3I3o7o1 00:29:32.877 09:44:05 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.XqI3I3o7o1 00:29:32.877 09:44:05 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:32.877 09:44:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.877 09:44:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.877 09:44:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.877 09:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.878 09:44:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.135 09:44:05 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:33.135 09:44:05 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.135 09:44:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.135 09:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.393 [2024-07-25 09:44:05.881894] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XqI3I3o7o1': No such file or directory 00:29:33.393 [2024-07-25 09:44:05.881930] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:33.393 [2024-07-25 09:44:05.881966] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:33.393 [2024-07-25 09:44:05.881979] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:33.393 [2024-07-25 09:44:05.881992] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:33.393 request: 00:29:33.393 { 00:29:33.393 "name": "nvme0", 00:29:33.393 "trtype": "tcp", 00:29:33.393 "traddr": "127.0.0.1", 00:29:33.393 "adrfam": "ipv4", 00:29:33.393 "trsvcid": "4420", 00:29:33.393 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:29:33.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:33.393 "prchk_reftag": false, 00:29:33.393 "prchk_guard": false, 00:29:33.393 "hdgst": false, 00:29:33.393 "ddgst": false, 00:29:33.393 "psk": "key0", 00:29:33.393 "method": "bdev_nvme_attach_controller", 00:29:33.393 "req_id": 1 00:29:33.393 } 00:29:33.393 Got JSON-RPC error response 00:29:33.393 response: 00:29:33.393 { 00:29:33.393 "code": -19, 00:29:33.393 "message": "No such device" 00:29:33.393 } 00:29:33.393 09:44:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:33.393 09:44:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.393 09:44:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.393 09:44:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.393 09:44:05 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:33.393 09:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:33.650 09:44:06 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DXWhfo7Jqk 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:33.650 09:44:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:33.650 09:44:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:33.650 09:44:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:33.650 09:44:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:33.650 09:44:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:33.650 09:44:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DXWhfo7Jqk 00:29:33.650 09:44:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DXWhfo7Jqk 00:29:33.650 09:44:06 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.DXWhfo7Jqk 00:29:33.650 09:44:06 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DXWhfo7Jqk 00:29:33.651 09:44:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DXWhfo7Jqk 00:29:33.908 09:44:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.908 09:44:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:34.165 nvme0n1 00:29:34.165 09:44:06 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:29:34.165 09:44:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.165 09:44:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.166 09:44:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.166 09:44:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.166 09:44:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.423 09:44:06 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:34.423 09:44:06 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:34.423 09:44:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:34.681 09:44:07 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:34.681 09:44:07 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:34.681 09:44:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.681 09:44:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.681 09:44:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.939 09:44:07 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:34.939 09:44:07 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:34.939 09:44:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.939 09:44:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.939 09:44:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.939 09:44:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.939 09:44:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.197 09:44:07 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:35.197 09:44:07 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:35.197 09:44:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:35.455 09:44:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:35.455 09:44:07 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:35.455 09:44:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.712 09:44:08 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:35.712 09:44:08 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DXWhfo7Jqk 00:29:35.712 09:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DXWhfo7Jqk 00:29:35.970 09:44:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JLk1Q0KnwR 00:29:35.970 09:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JLk1Q0KnwR 00:29:36.228 09:44:08 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.228 09:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.485 nvme0n1 00:29:36.485 09:44:09 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:36.485 09:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:36.743 09:44:09 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:36.743 "subsystems": [ 00:29:36.743 { 00:29:36.743 "subsystem": "keyring", 00:29:36.743 "config": [ 00:29:36.743 { 00:29:36.743 "method": "keyring_file_add_key", 00:29:36.743 "params": { 00:29:36.743 "name": "key0", 00:29:36.743 "path": "/tmp/tmp.DXWhfo7Jqk" 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "keyring_file_add_key", 00:29:36.743 "params": { 00:29:36.743 "name": "key1", 00:29:36.743 "path": "/tmp/tmp.JLk1Q0KnwR" 00:29:36.743 } 00:29:36.743 } 00:29:36.743 ] 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "subsystem": "iobuf", 00:29:36.743 "config": [ 00:29:36.743 { 00:29:36.743 "method": "iobuf_set_options", 00:29:36.743 "params": { 00:29:36.743 "small_pool_count": 8192, 00:29:36.743 "large_pool_count": 1024, 00:29:36.743 "small_bufsize": 8192, 00:29:36.743 "large_bufsize": 135168 00:29:36.743 } 00:29:36.743 } 00:29:36.743 ] 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "subsystem": "sock", 00:29:36.743 "config": [ 00:29:36.743 { 00:29:36.743 "method": "sock_set_default_impl", 00:29:36.743 "params": { 00:29:36.743 "impl_name": "posix" 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "sock_impl_set_options", 00:29:36.743 "params": { 00:29:36.743 "impl_name": "ssl", 00:29:36.743 "recv_buf_size": 4096, 00:29:36.743 "send_buf_size": 4096, 00:29:36.743 "enable_recv_pipe": true, 00:29:36.743 "enable_quickack": false, 00:29:36.743 "enable_placement_id": 0, 00:29:36.743 "enable_zerocopy_send_server": true, 00:29:36.743 "enable_zerocopy_send_client": false, 00:29:36.743 "zerocopy_threshold": 0, 00:29:36.743 "tls_version": 0, 00:29:36.743 "enable_ktls": false 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "sock_impl_set_options", 00:29:36.743 "params": { 00:29:36.743 "impl_name": "posix", 00:29:36.743 "recv_buf_size": 2097152, 00:29:36.743 "send_buf_size": 2097152, 00:29:36.743 "enable_recv_pipe": true, 00:29:36.743 "enable_quickack": false, 00:29:36.743 "enable_placement_id": 0, 00:29:36.743 "enable_zerocopy_send_server": true, 00:29:36.743 "enable_zerocopy_send_client": false, 00:29:36.743 "zerocopy_threshold": 0, 00:29:36.743 "tls_version": 0, 00:29:36.743 "enable_ktls": false 00:29:36.743 } 00:29:36.743 } 00:29:36.743 ] 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "subsystem": "vmd", 00:29:36.743 "config": [] 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "subsystem": "accel", 00:29:36.743 "config": [ 00:29:36.743 { 00:29:36.743 "method": "accel_set_options", 00:29:36.743 "params": { 00:29:36.743 "small_cache_size": 128, 00:29:36.743 "large_cache_size": 16, 00:29:36.743 "task_count": 2048, 00:29:36.743 "sequence_count": 2048, 00:29:36.743 "buf_count": 2048 00:29:36.743 } 00:29:36.743 } 00:29:36.743 ] 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 
"subsystem": "bdev", 00:29:36.743 "config": [ 00:29:36.743 { 00:29:36.743 "method": "bdev_set_options", 00:29:36.743 "params": { 00:29:36.743 "bdev_io_pool_size": 65535, 00:29:36.743 "bdev_io_cache_size": 256, 00:29:36.743 "bdev_auto_examine": true, 00:29:36.743 "iobuf_small_cache_size": 128, 00:29:36.743 "iobuf_large_cache_size": 16 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "bdev_raid_set_options", 00:29:36.743 "params": { 00:29:36.743 "process_window_size_kb": 1024, 00:29:36.743 "process_max_bandwidth_mb_sec": 0 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "bdev_iscsi_set_options", 00:29:36.743 "params": { 00:29:36.743 "timeout_sec": 30 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "bdev_nvme_set_options", 00:29:36.743 "params": { 00:29:36.743 "action_on_timeout": "none", 00:29:36.743 "timeout_us": 0, 00:29:36.743 "timeout_admin_us": 0, 00:29:36.743 "keep_alive_timeout_ms": 10000, 00:29:36.743 "arbitration_burst": 0, 00:29:36.743 "low_priority_weight": 0, 00:29:36.743 "medium_priority_weight": 0, 00:29:36.743 "high_priority_weight": 0, 00:29:36.743 "nvme_adminq_poll_period_us": 10000, 00:29:36.743 "nvme_ioq_poll_period_us": 0, 00:29:36.743 "io_queue_requests": 512, 00:29:36.743 "delay_cmd_submit": true, 00:29:36.743 "transport_retry_count": 4, 00:29:36.743 "bdev_retry_count": 3, 00:29:36.743 "transport_ack_timeout": 0, 00:29:36.743 "ctrlr_loss_timeout_sec": 0, 00:29:36.743 "reconnect_delay_sec": 0, 00:29:36.743 "fast_io_fail_timeout_sec": 0, 00:29:36.743 "disable_auto_failback": false, 00:29:36.743 "generate_uuids": false, 00:29:36.743 "transport_tos": 0, 00:29:36.743 "nvme_error_stat": false, 00:29:36.743 "rdma_srq_size": 0, 00:29:36.743 "io_path_stat": false, 00:29:36.743 "allow_accel_sequence": false, 00:29:36.743 "rdma_max_cq_size": 0, 00:29:36.743 "rdma_cm_event_timeout_ms": 0, 00:29:36.743 "dhchap_digests": [ 00:29:36.743 "sha256", 00:29:36.743 "sha384", 00:29:36.743 "sha512" 00:29:36.743 ], 00:29:36.743 "dhchap_dhgroups": [ 00:29:36.743 "null", 00:29:36.743 "ffdhe2048", 00:29:36.743 "ffdhe3072", 00:29:36.743 "ffdhe4096", 00:29:36.743 "ffdhe6144", 00:29:36.743 "ffdhe8192" 00:29:36.743 ] 00:29:36.743 } 00:29:36.743 }, 00:29:36.743 { 00:29:36.743 "method": "bdev_nvme_attach_controller", 00:29:36.743 "params": { 00:29:36.743 "name": "nvme0", 00:29:36.743 "trtype": "TCP", 00:29:36.744 "adrfam": "IPv4", 00:29:36.744 "traddr": "127.0.0.1", 00:29:36.744 "trsvcid": "4420", 00:29:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.744 "prchk_reftag": false, 00:29:36.744 "prchk_guard": false, 00:29:36.744 "ctrlr_loss_timeout_sec": 0, 00:29:36.744 "reconnect_delay_sec": 0, 00:29:36.744 "fast_io_fail_timeout_sec": 0, 00:29:36.744 "psk": "key0", 00:29:36.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.744 "hdgst": false, 00:29:36.744 "ddgst": false 00:29:36.744 } 00:29:36.744 }, 00:29:36.744 { 00:29:36.744 "method": "bdev_nvme_set_hotplug", 00:29:36.744 "params": { 00:29:36.744 "period_us": 100000, 00:29:36.744 "enable": false 00:29:36.744 } 00:29:36.744 }, 00:29:36.744 { 00:29:36.744 "method": "bdev_wait_for_examine" 00:29:36.744 } 00:29:36.744 ] 00:29:36.744 }, 00:29:36.744 { 00:29:36.744 "subsystem": "nbd", 00:29:36.744 "config": [] 00:29:36.744 } 00:29:36.744 ] 00:29:36.744 }' 00:29:36.744 09:44:09 keyring_file -- keyring/file.sh@114 -- # killprocess 656802 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 656802 ']' 00:29:36.744 09:44:09 keyring_file -- 
common/autotest_common.sh@952 -- # kill -0 656802 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656802 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656802' 00:29:36.744 killing process with pid 656802 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@967 -- # kill 656802 00:29:36.744 Received shutdown signal, test time was about 1.000000 seconds 00:29:36.744 00:29:36.744 Latency(us) 00:29:36.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.744 =================================================================================================================== 00:29:36.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.744 09:44:09 keyring_file -- common/autotest_common.sh@972 -- # wait 656802 00:29:37.002 09:44:09 keyring_file -- keyring/file.sh@117 -- # bperfpid=658266 00:29:37.002 09:44:09 keyring_file -- keyring/file.sh@119 -- # waitforlisten 658266 /var/tmp/bperf.sock 00:29:37.002 09:44:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 658266 ']' 00:29:37.002 09:44:09 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:37.002 09:44:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:37.002 09:44:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.002 09:44:09 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:37.002 "subsystems": [ 00:29:37.002 { 00:29:37.002 "subsystem": "keyring", 00:29:37.002 "config": [ 00:29:37.002 { 00:29:37.002 "method": "keyring_file_add_key", 00:29:37.002 "params": { 00:29:37.002 "name": "key0", 00:29:37.002 "path": "/tmp/tmp.DXWhfo7Jqk" 00:29:37.002 } 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "method": "keyring_file_add_key", 00:29:37.002 "params": { 00:29:37.002 "name": "key1", 00:29:37.002 "path": "/tmp/tmp.JLk1Q0KnwR" 00:29:37.002 } 00:29:37.002 } 00:29:37.002 ] 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "subsystem": "iobuf", 00:29:37.002 "config": [ 00:29:37.002 { 00:29:37.002 "method": "iobuf_set_options", 00:29:37.002 "params": { 00:29:37.002 "small_pool_count": 8192, 00:29:37.002 "large_pool_count": 1024, 00:29:37.002 "small_bufsize": 8192, 00:29:37.002 "large_bufsize": 135168 00:29:37.002 } 00:29:37.002 } 00:29:37.002 ] 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "subsystem": "sock", 00:29:37.002 "config": [ 00:29:37.002 { 00:29:37.002 "method": "sock_set_default_impl", 00:29:37.002 "params": { 00:29:37.002 "impl_name": "posix" 00:29:37.002 } 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "method": "sock_impl_set_options", 00:29:37.002 "params": { 00:29:37.002 "impl_name": "ssl", 00:29:37.002 "recv_buf_size": 4096, 00:29:37.002 "send_buf_size": 4096, 00:29:37.002 "enable_recv_pipe": true, 00:29:37.002 "enable_quickack": false, 00:29:37.002 "enable_placement_id": 0, 00:29:37.002 "enable_zerocopy_send_server": true, 00:29:37.002 "enable_zerocopy_send_client": false, 00:29:37.002 
"zerocopy_threshold": 0, 00:29:37.002 "tls_version": 0, 00:29:37.002 "enable_ktls": false 00:29:37.002 } 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "method": "sock_impl_set_options", 00:29:37.002 "params": { 00:29:37.002 "impl_name": "posix", 00:29:37.002 "recv_buf_size": 2097152, 00:29:37.002 "send_buf_size": 2097152, 00:29:37.002 "enable_recv_pipe": true, 00:29:37.002 "enable_quickack": false, 00:29:37.002 "enable_placement_id": 0, 00:29:37.002 "enable_zerocopy_send_server": true, 00:29:37.002 "enable_zerocopy_send_client": false, 00:29:37.002 "zerocopy_threshold": 0, 00:29:37.002 "tls_version": 0, 00:29:37.002 "enable_ktls": false 00:29:37.002 } 00:29:37.002 } 00:29:37.002 ] 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "subsystem": "vmd", 00:29:37.002 "config": [] 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "subsystem": "accel", 00:29:37.002 "config": [ 00:29:37.002 { 00:29:37.002 "method": "accel_set_options", 00:29:37.002 "params": { 00:29:37.002 "small_cache_size": 128, 00:29:37.002 "large_cache_size": 16, 00:29:37.002 "task_count": 2048, 00:29:37.002 "sequence_count": 2048, 00:29:37.002 "buf_count": 2048 00:29:37.002 } 00:29:37.002 } 00:29:37.002 ] 00:29:37.002 }, 00:29:37.002 { 00:29:37.002 "subsystem": "bdev", 00:29:37.002 "config": [ 00:29:37.002 { 00:29:37.002 "method": "bdev_set_options", 00:29:37.003 "params": { 00:29:37.003 "bdev_io_pool_size": 65535, 00:29:37.003 "bdev_io_cache_size": 256, 00:29:37.003 "bdev_auto_examine": true, 00:29:37.003 "iobuf_small_cache_size": 128, 00:29:37.003 "iobuf_large_cache_size": 16 00:29:37.003 } 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "method": "bdev_raid_set_options", 00:29:37.003 "params": { 00:29:37.003 "process_window_size_kb": 1024, 00:29:37.003 "process_max_bandwidth_mb_sec": 0 00:29:37.003 } 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "method": "bdev_iscsi_set_options", 00:29:37.003 "params": { 00:29:37.003 "timeout_sec": 30 00:29:37.003 } 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "method": "bdev_nvme_set_options", 00:29:37.003 "params": { 00:29:37.003 "action_on_timeout": "none", 00:29:37.003 "timeout_us": 0, 00:29:37.003 "timeout_admin_us": 0, 00:29:37.003 "keep_alive_timeout_ms": 10000, 00:29:37.003 "arbitration_burst": 0, 00:29:37.003 "low_priority_weight": 0, 00:29:37.003 "medium_priority_weight": 0, 00:29:37.003 "high_priority_weight": 0, 00:29:37.003 "nvme_adminq_poll_period_us": 10000, 00:29:37.003 "nvme_ioq_poll_period_us": 0, 00:29:37.003 "io_queue_requests": 512, 00:29:37.003 "delay_cmd_submit": true, 00:29:37.003 "transport_retry_count": 4, 00:29:37.003 "bdev_retry_count": 3, 00:29:37.003 "transport_ack_timeout": 0, 00:29:37.003 "ctrlr_loss_timeout_sec": 0, 00:29:37.003 "reconnect_delay_sec": 0, 00:29:37.003 "fast_io_fail_timeout_sec": 0, 00:29:37.003 "disable_auto_failback": false, 00:29:37.003 "generate_uuids": false, 00:29:37.003 "transport_tos": 0, 00:29:37.003 "nvme_error_stat": false, 00:29:37.003 "rdma_srq_size": 0, 00:29:37.003 "io_path_stat": false, 00:29:37.003 09:44:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:37.003 "allow_accel_sequence": false, 00:29:37.003 "rdma_max_cq_size": 0, 00:29:37.003 "rdma_cm_event_timeout_ms": 0, 00:29:37.003 "dhchap_digests": [ 00:29:37.003 "sha256", 00:29:37.003 "sha384", 00:29:37.003 "sha512" 00:29:37.003 ], 00:29:37.003 "dhchap_dhgroups": [ 00:29:37.003 "null", 00:29:37.003 "ffdhe2048", 00:29:37.003 "ffdhe3072", 00:29:37.003 "ffdhe4096", 00:29:37.003 "ffdhe6144", 00:29:37.003 "ffdhe8192" 00:29:37.003 ] 00:29:37.003 } 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "method": "bdev_nvme_attach_controller", 00:29:37.003 "params": { 00:29:37.003 "name": "nvme0", 00:29:37.003 "trtype": "TCP", 00:29:37.003 "adrfam": "IPv4", 00:29:37.003 "traddr": "127.0.0.1", 00:29:37.003 "trsvcid": "4420", 00:29:37.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.003 "prchk_reftag": false, 00:29:37.003 "prchk_guard": false, 00:29:37.003 "ctrlr_loss_timeout_sec": 0, 00:29:37.003 "reconnect_delay_sec": 0, 00:29:37.003 "fast_io_fail_timeout_sec": 0, 00:29:37.003 "psk": "key0", 00:29:37.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.003 "hdgst": false, 00:29:37.003 "ddgst": false 00:29:37.003 } 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "method": "bdev_nvme_set_hotplug", 00:29:37.003 "params": { 00:29:37.003 "period_us": 100000, 00:29:37.003 "enable": false 00:29:37.003 } 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "method": "bdev_wait_for_examine" 00:29:37.003 } 00:29:37.003 ] 00:29:37.003 }, 00:29:37.003 { 00:29:37.003 "subsystem": "nbd", 00:29:37.003 "config": [] 00:29:37.003 } 00:29:37.003 ] 00:29:37.003 }' 00:29:37.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:37.003 09:44:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.003 09:44:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:37.003 [2024-07-25 09:44:09.673740] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:29:37.003 [2024-07-25 09:44:09.673838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658266 ] 00:29:37.003 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.003 [2024-07-25 09:44:09.734614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.261 [2024-07-25 09:44:09.847638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.519 [2024-07-25 09:44:10.048964] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:38.086 09:44:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.086 09:44:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:38.086 09:44:10 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:38.086 09:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.086 09:44:10 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:38.344 09:44:10 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:38.344 09:44:10 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:38.344 09:44:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:38.344 09:44:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.344 09:44:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.344 09:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.344 09:44:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.602 09:44:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:38.602 09:44:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:38.602 09:44:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:38.602 09:44:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.602 09:44:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.602 09:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.602 09:44:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:38.860 09:44:11 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:38.860 09:44:11 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:38.860 09:44:11 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:38.860 09:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:39.118 09:44:11 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:39.118 09:44:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:39.118 09:44:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.DXWhfo7Jqk /tmp/tmp.JLk1Q0KnwR 00:29:39.118 09:44:11 keyring_file -- keyring/file.sh@20 -- # killprocess 658266 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 658266 ']' 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@952 -- # kill -0 658266 00:29:39.118 09:44:11 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 658266 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 658266' 00:29:39.118 killing process with pid 658266 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@967 -- # kill 658266 00:29:39.118 Received shutdown signal, test time was about 1.000000 seconds 00:29:39.118 00:29:39.118 Latency(us) 00:29:39.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.118 =================================================================================================================== 00:29:39.118 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:39.118 09:44:11 keyring_file -- common/autotest_common.sh@972 -- # wait 658266 00:29:39.374 09:44:11 keyring_file -- keyring/file.sh@21 -- # killprocess 656792 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 656792 ']' 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@952 -- # kill -0 656792 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656792 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656792' 00:29:39.374 killing process with pid 656792 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@967 -- # kill 656792 00:29:39.374 [2024-07-25 09:44:11.958152] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:39.374 09:44:11 keyring_file -- common/autotest_common.sh@972 -- # wait 656792 00:29:39.937 00:29:39.937 real 0m14.190s 00:29:39.937 user 0m35.593s 00:29:39.937 sys 0m3.153s 00:29:39.937 09:44:12 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.937 09:44:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:39.937 ************************************ 00:29:39.937 END TEST keyring_file 00:29:39.937 ************************************ 00:29:39.937 09:44:12 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:39.937 09:44:12 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:39.937 09:44:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:39.937 09:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.937 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:29:39.937 ************************************ 00:29:39.937 START TEST keyring_linux 00:29:39.937 ************************************ 00:29:39.937 09:44:12 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:39.937 * Looking for test storage... 
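Every bperf_cmd in the keyring_file run that just finished is scripts/rpc.py issuing JSON-RPC over the bdevperf UNIX socket /var/tmp/bperf.sock, with jq filters such as '.[] | select(.name == "key0")' and '.refcnt' reducing the keyring_get_keys output to the value being asserted. A rough sketch of that exchange without rpc.py, assuming the server answers each request with a single JSON object on the same socket; rpc_call is an illustrative helper, not an SPDK API:

```python
import json
import socket


def rpc_call(sock_path, method, params=None, req_id=1):
    """Send one JSON-RPC 2.0 request over an SPDK UNIX socket and return the result."""
    request = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        request["params"] = params

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())

        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                reply = json.loads(buf)   # response is a single JSON object
            except json.JSONDecodeError:
                continue                  # keep reading until it parses completely
            if "error" in reply:
                raise RuntimeError(reply["error"])
            return reply.get("result")


if __name__ == "__main__":
    keys = rpc_call("/var/tmp/bperf.sock", "keyring_get_keys") or []
    key0 = next((k for k in keys if k.get("name") == "key0"), None)
    # Rough equivalent of: jq '.[] | select(.name == "key0") | .refcnt'
    print(key0["refcnt"] if key0 else "key0 not loaded")
```

The keyring_linux suite that starts next drives its own bdevperf instance through the same socket and the same keyring_get_keys checks.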
00:29:39.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.937 09:44:12 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.937 09:44:12 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.937 09:44:12 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.937 09:44:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.937 09:44:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.937 09:44:12 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.937 09:44:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:39.937 09:44:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:39.937 09:44:12 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:39.937 /tmp/:spdk-test:key0 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:39.937 09:44:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:39.937 09:44:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:39.937 /tmp/:spdk-test:key1 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=658632 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:39.937 09:44:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 658632 00:29:39.937 09:44:12 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 658632 ']' 00:29:39.937 09:44:12 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.937 09:44:12 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:39.937 09:44:12 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.938 09:44:12 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.938 09:44:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:39.938 [2024-07-25 09:44:12.643204] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
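prep_key has just written the interchange form of 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, and those strings are what keyring_linux loads into the kernel session keyring below. Unwrapping such a string back into the configured key is the reverse of the earlier encoding sketch; in this hedged example (unwrap_interchange_psk is an illustrative name) the 4-byte trailer is assumed to be the little-endian CRC-32 of the key, and the check is only reported, not enforced:

```python
import base64
import zlib


def unwrap_interchange_psk(interchange: str) -> bytes:
    """Split an NVMeTLSkey-1 interchange string back into the configured key.

    The 4-byte trailer is assumed to be the little-endian CRC-32 of the key,
    mirroring the earlier encoding sketch; the comparison is printed rather
    than enforced in case that assumption is off.
    """
    prefix, _hash, b64, _ = interchange.split(":")
    if prefix != "NVMeTLSkey-1":
        raise ValueError("not an interchange-format PSK")
    blob = base64.b64decode(b64)
    key, trailer = blob[:-4], blob[-4:]
    expected = zlib.crc32(key).to_bytes(4, "little")
    print("crc trailer matches little-endian crc32:", trailer == expected)
    return key


if __name__ == "__main__":
    # Interchange payload for 00112233445566778899aabbccddeeff, as seen in the
    # keyctl output below; the same string is what prep_key writes to the file.
    psk = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
    print(unwrap_interchange_psk(psk))  # recovers b'00112233445566778899aabbccddeeff'
```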
00:29:39.938 [2024-07-25 09:44:12.643300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658632 ] 00:29:40.195 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.195 [2024-07-25 09:44:12.701904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.195 [2024-07-25 09:44:12.821852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:40.452 09:44:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:40.452 [2024-07-25 09:44:13.068874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.452 null0 00:29:40.452 [2024-07-25 09:44:13.100872] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:40.452 [2024-07-25 09:44:13.101309] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.452 09:44:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:40.452 490114014 00:29:40.452 09:44:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:40.452 258353614 00:29:40.452 09:44:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=658756 00:29:40.452 09:44:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:40.452 09:44:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 658756 /var/tmp/bperf.sock 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 658756 ']' 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:40.452 09:44:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:40.453 09:44:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:40.453 09:44:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:40.453 [2024-07-25 09:44:13.170461] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
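The two serial numbers printed above (490114014 for :spdk-test:key0 and 258353614 for :spdk-test:key1) come from keyctl add into the session keyring, and the later check_keys steps re-derive them with keyctl search and compare keyctl print output against the expected interchange string. A small sketch of those keyctl(1) invocations wrapped from Python; the helper names and the :spdk-test:example key name are illustrative, and error handling is minimal:

```python
import subprocess


def keyctl(*args) -> str:
    """Run keyctl(1) and return its stdout, stripped."""
    return subprocess.run(
        ("keyctl", *args), check=True, capture_output=True, text=True
    ).stdout.strip()


def add_session_key(name: str, interchange_psk: str) -> int:
    # keyctl add user <name> <payload> @s  -> prints the new key's serial number
    return int(keyctl("add", "user", name, interchange_psk, "@s"))


def find_session_key(name: str) -> int:
    # keyctl search @s user <name>  -> serial number, as the test's get_keysn does
    return int(keyctl("search", "@s", "user", name))


def remove_session_key(name: str) -> None:
    # keyctl unlink <serial>  -> "1 links removed" on success
    keyctl("unlink", str(find_session_key(name)))


if __name__ == "__main__":
    payload = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
    sn = add_session_key(":spdk-test:example", payload)
    assert sn == find_session_key(":spdk-test:example")
    print("payload:", keyctl("print", str(sn)))
    remove_session_key(":spdk-test:example")
```

After the I/O pass, the cleanup path unlinks both test keys the same way, which is where the "1 links removed" messages further down come from.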
00:29:40.453 [2024-07-25 09:44:13.170546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658756 ] 00:29:40.713 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.713 [2024-07-25 09:44:13.230132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.713 [2024-07-25 09:44:13.345163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.646 09:44:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:41.646 09:44:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:41.646 09:44:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:41.646 09:44:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:41.646 09:44:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:41.646 09:44:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:42.211 09:44:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:42.211 09:44:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:42.212 [2024-07-25 09:44:14.898692] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:42.469 nvme0n1 00:29:42.469 09:44:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:42.469 09:44:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:42.469 09:44:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:42.469 09:44:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:42.469 09:44:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.469 09:44:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:42.728 09:44:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:42.728 09:44:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:42.728 09:44:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:42.728 09:44:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:42.728 09:44:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.728 09:44:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.728 09:44:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@25 -- # sn=490114014 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 490114014 == \4\9\0\1\1\4\0\1\4 ]] 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 490114014 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:42.985 09:44:15 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:42.985 Running I/O for 1 seconds... 00:29:43.917 00:29:43.917 Latency(us) 00:29:43.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.917 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:43.917 nvme0n1 : 1.01 9788.19 38.24 0.00 0.00 12989.59 9806.13 24855.13 00:29:43.917 =================================================================================================================== 00:29:43.918 Total : 9788.19 38.24 0.00 0.00 12989.59 9806.13 24855.13 00:29:43.918 0 00:29:43.918 09:44:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:43.918 09:44:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:44.175 09:44:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:44.175 09:44:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:44.175 09:44:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:44.175 09:44:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:44.175 09:44:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:44.175 09:44:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.432 09:44:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:44.432 09:44:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:44.432 09:44:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:44.432 09:44:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:44.433 09:44:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:44.433 09:44:17 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:44.690 [2024-07-25 09:44:17.366759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:44.690 [2024-07-25 09:44:17.366992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x612890 (107): Transport endpoint is not connected 00:29:44.690 [2024-07-25 09:44:17.367981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x612890 (9): Bad file descriptor 00:29:44.690 [2024-07-25 09:44:17.368980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:44.690 [2024-07-25 09:44:17.369002] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:44.690 [2024-07-25 09:44:17.369017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:44.690 request: 00:29:44.690 { 00:29:44.690 "name": "nvme0", 00:29:44.690 "trtype": "tcp", 00:29:44.690 "traddr": "127.0.0.1", 00:29:44.690 "adrfam": "ipv4", 00:29:44.690 "trsvcid": "4420", 00:29:44.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.690 "prchk_reftag": false, 00:29:44.690 "prchk_guard": false, 00:29:44.690 "hdgst": false, 00:29:44.690 "ddgst": false, 00:29:44.690 "psk": ":spdk-test:key1", 00:29:44.690 "method": "bdev_nvme_attach_controller", 00:29:44.690 "req_id": 1 00:29:44.690 } 00:29:44.690 Got JSON-RPC error response 00:29:44.690 response: 00:29:44.690 { 00:29:44.690 "code": -5, 00:29:44.690 "message": "Input/output error" 00:29:44.690 } 00:29:44.690 09:44:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:44.690 09:44:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:44.690 09:44:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:44.690 09:44:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@33 -- # sn=490114014 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 490114014 00:29:44.690 1 links removed 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@33 -- # sn=258353614 00:29:44.690 09:44:17 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 258353614 00:29:44.690 1 links removed 00:29:44.690 09:44:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 658756 00:29:44.690 09:44:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 658756 ']' 00:29:44.690 09:44:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 658756 00:29:44.691 09:44:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:44.691 09:44:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.691 09:44:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 658756 00:29:44.948 09:44:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:44.948 09:44:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:44.948 09:44:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 658756' 00:29:44.948 killing process with pid 658756 00:29:44.948 09:44:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 658756 00:29:44.948 Received shutdown signal, test time was about 1.000000 seconds 00:29:44.948 00:29:44.948 Latency(us) 00:29:44.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.948 =================================================================================================================== 00:29:44.948 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.948 09:44:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 658756 00:29:45.205 09:44:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 658632 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 658632 ']' 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 658632 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 658632 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 658632' 00:29:45.205 killing process with pid 658632 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 658632 00:29:45.205 09:44:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 658632 00:29:45.769 00:29:45.769 real 0m5.745s 00:29:45.769 user 0m11.432s 00:29:45.769 sys 0m1.561s 00:29:45.769 09:44:18 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.769 09:44:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:45.769 ************************************ 00:29:45.769 END TEST keyring_linux 00:29:45.769 ************************************ 00:29:45.769 09:44:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 
']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:45.769 09:44:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:45.769 09:44:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:45.769 09:44:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:45.769 09:44:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:45.770 09:44:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:45.770 09:44:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:45.770 09:44:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:45.770 09:44:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.770 09:44:18 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:45.770 09:44:18 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:29:45.770 09:44:18 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:29:45.770 09:44:18 -- common/autotest_common.sh@10 -- # set +x 00:29:47.667 INFO: APP EXITING 00:29:47.667 INFO: killing all VMs 00:29:47.667 INFO: killing vhost app 00:29:47.667 INFO: EXIT DONE 00:29:48.600 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:29:48.600 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:48.600 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:48.600 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:48.600 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:48.600 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:48.600 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:48.600 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:48.600 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:48.600 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:48.600 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:48.600 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:48.600 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:48.600 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:48.600 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:48.600 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:48.858 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:49.793 Cleaning 00:29:49.793 Removing: /var/run/dpdk/spdk0/config 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:49.793 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:49.793 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:49.793 Removing: /var/run/dpdk/spdk1/config 00:29:49.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:49.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:49.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:50.053 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:50.053 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:50.053 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:50.053 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:50.053 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:50.053 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:50.053 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:50.053 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:50.053 Removing: /var/run/dpdk/spdk2/config 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:50.053 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:50.053 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:50.053 Removing: /var/run/dpdk/spdk3/config 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:50.053 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:50.053 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:50.053 Removing: /var/run/dpdk/spdk4/config 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:50.053 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:50.053 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:50.053 Removing: /dev/shm/bdev_svc_trace.1 00:29:50.053 Removing: /dev/shm/nvmf_trace.0 00:29:50.053 Removing: /dev/shm/spdk_tgt_trace.pid395576 00:29:50.053 Removing: /var/run/dpdk/spdk0 00:29:50.053 Removing: /var/run/dpdk/spdk1 00:29:50.053 Removing: /var/run/dpdk/spdk2 00:29:50.053 Removing: /var/run/dpdk/spdk3 00:29:50.053 Removing: /var/run/dpdk/spdk4 00:29:50.053 Removing: /var/run/dpdk/spdk_pid393767 00:29:50.053 Removing: /var/run/dpdk/spdk_pid394633 00:29:50.053 Removing: /var/run/dpdk/spdk_pid395576 00:29:50.053 Removing: /var/run/dpdk/spdk_pid396011 00:29:50.053 Removing: /var/run/dpdk/spdk_pid396698 00:29:50.053 Removing: /var/run/dpdk/spdk_pid396851 00:29:50.053 Removing: /var/run/dpdk/spdk_pid397567 00:29:50.053 Removing: /var/run/dpdk/spdk_pid397697 00:29:50.053 Removing: /var/run/dpdk/spdk_pid397947 00:29:50.053 Removing: /var/run/dpdk/spdk_pid399274 00:29:50.053 Removing: /var/run/dpdk/spdk_pid400310 00:29:50.053 Removing: /var/run/dpdk/spdk_pid400502 00:29:50.053 Removing: /var/run/dpdk/spdk_pid400812 00:29:50.053 Removing: 
/var/run/dpdk/spdk_pid401025 00:29:50.053 Removing: /var/run/dpdk/spdk_pid401217 00:29:50.053 Removing: /var/run/dpdk/spdk_pid401402 00:29:50.053 Removing: /var/run/dpdk/spdk_pid401650 00:29:50.053 Removing: /var/run/dpdk/spdk_pid401833 00:29:50.053 Removing: /var/run/dpdk/spdk_pid402088 00:29:50.053 Removing: /var/run/dpdk/spdk_pid404524 00:29:50.053 Removing: /var/run/dpdk/spdk_pid404701 00:29:50.053 Removing: /var/run/dpdk/spdk_pid404868 00:29:50.053 Removing: /var/run/dpdk/spdk_pid405001 00:29:50.053 Removing: /var/run/dpdk/spdk_pid405309 00:29:50.053 Removing: /var/run/dpdk/spdk_pid405431 00:29:50.053 Removing: /var/run/dpdk/spdk_pid405743 00:29:50.053 Removing: /var/run/dpdk/spdk_pid405895 00:29:50.053 Removing: /var/run/dpdk/spdk_pid406209 00:29:50.053 Removing: /var/run/dpdk/spdk_pid406289 00:29:50.053 Removing: /var/run/dpdk/spdk_pid406453 00:29:50.053 Removing: /var/run/dpdk/spdk_pid406507 00:29:50.053 Removing: /var/run/dpdk/spdk_pid406949 00:29:50.053 Removing: /var/run/dpdk/spdk_pid407264 00:29:50.053 Removing: /var/run/dpdk/spdk_pid407764 00:29:50.053 Removing: /var/run/dpdk/spdk_pid407983 00:29:50.053 Removing: /var/run/dpdk/spdk_pid408128 00:29:50.053 Removing: /var/run/dpdk/spdk_pid408194 00:29:50.053 Removing: /var/run/dpdk/spdk_pid408471 00:29:50.053 Removing: /var/run/dpdk/spdk_pid408627 00:29:50.053 Removing: /var/run/dpdk/spdk_pid408786 00:29:50.053 Removing: /var/run/dpdk/spdk_pid409057 00:29:50.053 Removing: /var/run/dpdk/spdk_pid409221 00:29:50.053 Removing: /var/run/dpdk/spdk_pid409372 00:29:50.053 Removing: /var/run/dpdk/spdk_pid409651 00:29:50.053 Removing: /var/run/dpdk/spdk_pid409807 00:29:50.053 Removing: /var/run/dpdk/spdk_pid409979 00:29:50.053 Removing: /var/run/dpdk/spdk_pid410237 00:29:50.053 Removing: /var/run/dpdk/spdk_pid410402 00:29:50.053 Removing: /var/run/dpdk/spdk_pid410580 00:29:50.053 Removing: /var/run/dpdk/spdk_pid410832 00:29:50.053 Removing: /var/run/dpdk/spdk_pid410987 00:29:50.053 Removing: /var/run/dpdk/spdk_pid411224 00:29:50.053 Removing: /var/run/dpdk/spdk_pid411418 00:29:50.053 Removing: /var/run/dpdk/spdk_pid411584 00:29:50.053 Removing: /var/run/dpdk/spdk_pid411854 00:29:50.053 Removing: /var/run/dpdk/spdk_pid412018 00:29:50.053 Removing: /var/run/dpdk/spdk_pid412174 00:29:50.053 Removing: /var/run/dpdk/spdk_pid412365 00:29:50.053 Removing: /var/run/dpdk/spdk_pid412569 00:29:50.053 Removing: /var/run/dpdk/spdk_pid414782 00:29:50.053 Removing: /var/run/dpdk/spdk_pid417414 00:29:50.053 Removing: /var/run/dpdk/spdk_pid424251 00:29:50.053 Removing: /var/run/dpdk/spdk_pid424662 00:29:50.053 Removing: /var/run/dpdk/spdk_pid427164 00:29:50.053 Removing: /var/run/dpdk/spdk_pid427334 00:29:50.053 Removing: /var/run/dpdk/spdk_pid429951 00:29:50.053 Removing: /var/run/dpdk/spdk_pid433782 00:29:50.053 Removing: /var/run/dpdk/spdk_pid435860 00:29:50.053 Removing: /var/run/dpdk/spdk_pid442868 00:29:50.053 Removing: /var/run/dpdk/spdk_pid448083 00:29:50.053 Removing: /var/run/dpdk/spdk_pid449279 00:29:50.053 Removing: /var/run/dpdk/spdk_pid449951 00:29:50.053 Removing: /var/run/dpdk/spdk_pid460435 00:29:50.053 Removing: /var/run/dpdk/spdk_pid462719 00:29:50.053 Removing: /var/run/dpdk/spdk_pid489313 00:29:50.053 Removing: /var/run/dpdk/spdk_pid492676 00:29:50.053 Removing: /var/run/dpdk/spdk_pid496420 00:29:50.053 Removing: /var/run/dpdk/spdk_pid500378 00:29:50.053 Removing: /var/run/dpdk/spdk_pid500380 00:29:50.053 Removing: /var/run/dpdk/spdk_pid500926 00:29:50.053 Removing: /var/run/dpdk/spdk_pid501582 00:29:50.053 Removing: 
/var/run/dpdk/spdk_pid502232 00:29:50.053 Removing: /var/run/dpdk/spdk_pid502636 00:29:50.053 Removing: /var/run/dpdk/spdk_pid502650 00:29:50.053 Removing: /var/run/dpdk/spdk_pid502789 00:29:50.053 Removing: /var/run/dpdk/spdk_pid502924 00:29:50.053 Removing: /var/run/dpdk/spdk_pid502990 00:29:50.053 Removing: /var/run/dpdk/spdk_pid503589 00:29:50.053 Removing: /var/run/dpdk/spdk_pid504240 00:29:50.053 Removing: /var/run/dpdk/spdk_pid504851 00:29:50.053 Removing: /var/run/dpdk/spdk_pid505301 00:29:50.053 Removing: /var/run/dpdk/spdk_pid505305 00:29:50.053 Removing: /var/run/dpdk/spdk_pid505563 00:29:50.053 Removing: /var/run/dpdk/spdk_pid506487 00:29:50.053 Removing: /var/run/dpdk/spdk_pid507297 00:29:50.053 Removing: /var/run/dpdk/spdk_pid513013 00:29:50.053 Removing: /var/run/dpdk/spdk_pid538950 00:29:50.053 Removing: /var/run/dpdk/spdk_pid541815 00:29:50.053 Removing: /var/run/dpdk/spdk_pid542902 00:29:50.054 Removing: /var/run/dpdk/spdk_pid544220 00:29:50.054 Removing: /var/run/dpdk/spdk_pid544362 00:29:50.054 Removing: /var/run/dpdk/spdk_pid544498 00:29:50.054 Removing: /var/run/dpdk/spdk_pid544632 00:29:50.054 Removing: /var/run/dpdk/spdk_pid544951 00:29:50.054 Removing: /var/run/dpdk/spdk_pid546276 00:29:50.054 Removing: /var/run/dpdk/spdk_pid546994 00:29:50.054 Removing: /var/run/dpdk/spdk_pid547326 00:29:50.054 Removing: /var/run/dpdk/spdk_pid549033 00:29:50.054 Removing: /var/run/dpdk/spdk_pid549346 00:29:50.054 Removing: /var/run/dpdk/spdk_pid549905 00:29:50.312 Removing: /var/run/dpdk/spdk_pid552429 00:29:50.312 Removing: /var/run/dpdk/spdk_pid558580 00:29:50.312 Removing: /var/run/dpdk/spdk_pid561252 00:29:50.312 Removing: /var/run/dpdk/spdk_pid565010 00:29:50.312 Removing: /var/run/dpdk/spdk_pid566073 00:29:50.312 Removing: /var/run/dpdk/spdk_pid567675 00:29:50.312 Removing: /var/run/dpdk/spdk_pid570366 00:29:50.312 Removing: /var/run/dpdk/spdk_pid572615 00:29:50.312 Removing: /var/run/dpdk/spdk_pid576944 00:29:50.312 Removing: /var/run/dpdk/spdk_pid576946 00:29:50.312 Removing: /var/run/dpdk/spdk_pid579727 00:29:50.312 Removing: /var/run/dpdk/spdk_pid579984 00:29:50.312 Removing: /var/run/dpdk/spdk_pid580114 00:29:50.312 Removing: /var/run/dpdk/spdk_pid580386 00:29:50.312 Removing: /var/run/dpdk/spdk_pid580512 00:29:50.312 Removing: /var/run/dpdk/spdk_pid583280 00:29:50.312 Removing: /var/run/dpdk/spdk_pid583742 00:29:50.312 Removing: /var/run/dpdk/spdk_pid586402 00:29:50.312 Removing: /var/run/dpdk/spdk_pid588263 00:29:50.312 Removing: /var/run/dpdk/spdk_pid591810 00:29:50.312 Removing: /var/run/dpdk/spdk_pid595251 00:29:50.312 Removing: /var/run/dpdk/spdk_pid601823 00:29:50.312 Removing: /var/run/dpdk/spdk_pid606836 00:29:50.312 Removing: /var/run/dpdk/spdk_pid606840 00:29:50.312 Removing: /var/run/dpdk/spdk_pid619886 00:29:50.312 Removing: /var/run/dpdk/spdk_pid620292 00:29:50.312 Removing: /var/run/dpdk/spdk_pid620700 00:29:50.313 Removing: /var/run/dpdk/spdk_pid621240 00:29:50.313 Removing: /var/run/dpdk/spdk_pid621822 00:29:50.313 Removing: /var/run/dpdk/spdk_pid622228 00:29:50.313 Removing: /var/run/dpdk/spdk_pid622638 00:29:50.313 Removing: /var/run/dpdk/spdk_pid623048 00:29:50.313 Removing: /var/run/dpdk/spdk_pid625546 00:29:50.313 Removing: /var/run/dpdk/spdk_pid625746 00:29:50.313 Removing: /var/run/dpdk/spdk_pid629477 00:29:50.313 Removing: /var/run/dpdk/spdk_pid629645 00:29:50.313 Removing: /var/run/dpdk/spdk_pid631258 00:29:50.313 Removing: /var/run/dpdk/spdk_pid636545 00:29:50.313 Removing: /var/run/dpdk/spdk_pid636631 00:29:50.313 Removing: 
/var/run/dpdk/spdk_pid640077 00:29:50.313 Removing: /var/run/dpdk/spdk_pid641481 00:29:50.313 Removing: /var/run/dpdk/spdk_pid642879 00:29:50.313 Removing: /var/run/dpdk/spdk_pid643735 00:29:50.313 Removing: /var/run/dpdk/spdk_pid645024 00:29:50.313 Removing: /var/run/dpdk/spdk_pid645905 00:29:50.313 Removing: /var/run/dpdk/spdk_pid651269 00:29:50.313 Removing: /var/run/dpdk/spdk_pid651587 00:29:50.313 Removing: /var/run/dpdk/spdk_pid651978 00:29:50.313 Removing: /var/run/dpdk/spdk_pid653566 00:29:50.313 Removing: /var/run/dpdk/spdk_pid653943 00:29:50.313 Removing: /var/run/dpdk/spdk_pid654338 00:29:50.313 Removing: /var/run/dpdk/spdk_pid656792 00:29:50.313 Removing: /var/run/dpdk/spdk_pid656802 00:29:50.313 Removing: /var/run/dpdk/spdk_pid658266 00:29:50.313 Removing: /var/run/dpdk/spdk_pid658632 00:29:50.313 Removing: /var/run/dpdk/spdk_pid658756 00:29:50.313 Clean 00:29:50.313 09:44:22 -- common/autotest_common.sh@1449 -- # return 0 00:29:50.313 09:44:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:50.313 09:44:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.313 09:44:22 -- common/autotest_common.sh@10 -- # set +x 00:29:50.313 09:44:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:50.313 09:44:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.313 09:44:22 -- common/autotest_common.sh@10 -- # set +x 00:29:50.313 09:44:22 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:50.313 09:44:22 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:50.313 09:44:22 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:50.313 09:44:22 -- spdk/autotest.sh@391 -- # hash lcov 00:29:50.313 09:44:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:50.313 09:44:22 -- spdk/autotest.sh@393 -- # hostname 00:29:50.313 09:44:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:50.571 geninfo: WARNING: invalid characters removed from testname! 
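The coverage post-processing that follows in the log (merging the pre-test lcov baseline with the post-test capture, then filtering DPDK, system and example/app code out of the combined report) can be reproduced locally with a short script along these lines. This is a hedged sketch, not the harness's own code: the paths and lcov flags are copied from this run, but the loop over filter patterns is an illustrative restructuring of the individual -r invocations recorded below.

#!/usr/bin/env bash
# Sketch of the lcov merge/filter steps recorded in this log (paths from this run).
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SRC/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

# Merge the baseline captured before the tests with the capture taken afterwards.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip DPDK, system headers and example/app code from the combined report,
# mirroring the -r invocations that appear next in the log.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done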
00:30:22.634 09:44:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:23.197 09:44:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:26.472 09:44:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:28.997 09:45:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:32.306 09:45:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:35.625 09:45:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:38.149 09:45:10 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:38.407 09:45:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.407 09:45:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:38.407 09:45:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.407 09:45:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.407 09:45:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.407 09:45:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.408 09:45:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.408 09:45:10 -- paths/export.sh@5 -- $ export PATH 00:30:38.408 09:45:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.408 09:45:10 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:38.408 09:45:10 -- common/autobuild_common.sh@447 -- $ date +%s 00:30:38.408 09:45:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721893510.XXXXXX 00:30:38.408 09:45:10 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721893510.OIgldb 00:30:38.408 09:45:10 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:30:38.408 09:45:10 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:30:38.408 09:45:10 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:38.408 09:45:10 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:38.408 09:45:10 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:38.408 09:45:10 -- common/autobuild_common.sh@463 -- $ get_config_params 00:30:38.408 09:45:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:38.408 09:45:10 -- common/autotest_common.sh@10 -- $ set +x 00:30:38.408 09:45:10 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:38.408 09:45:10 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:30:38.408 09:45:10 -- pm/common@17 -- $ local monitor 00:30:38.408 09:45:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:38.408 09:45:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:38.408 09:45:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:38.408 09:45:10 -- pm/common@21 -- $ date +%s 00:30:38.408 09:45:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:38.408 09:45:10 -- pm/common@21 -- $ date +%s 00:30:38.408 
09:45:10 -- pm/common@25 -- $ sleep 1 00:30:38.408 09:45:10 -- pm/common@21 -- $ date +%s 00:30:38.408 09:45:10 -- pm/common@21 -- $ date +%s 00:30:38.408 09:45:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721893510 00:30:38.408 09:45:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721893510 00:30:38.408 09:45:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721893510 00:30:38.408 09:45:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721893510 00:30:38.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721893510_collect-vmstat.pm.log 00:30:38.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721893510_collect-cpu-load.pm.log 00:30:38.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721893510_collect-cpu-temp.pm.log 00:30:38.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721893510_collect-bmc-pm.bmc.pm.log 00:30:39.342 09:45:11 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:30:39.342 09:45:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:39.342 09:45:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:39.342 09:45:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:39.342 09:45:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:39.342 09:45:11 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:39.342 09:45:11 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:39.342 09:45:11 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:39.342 09:45:11 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:39.342 09:45:11 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:39.342 09:45:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:39.342 09:45:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:39.342 09:45:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:39.342 09:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:39.342 09:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:39.342 09:45:11 -- pm/common@44 -- $ pid=669176 00:30:39.342 09:45:11 -- pm/common@50 -- $ kill -TERM 669176 00:30:39.342 09:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:39.342 09:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:39.342 09:45:11 -- pm/common@44 -- $ pid=669178 00:30:39.342 09:45:11 -- pm/common@50 -- $ kill 
-TERM 669178 00:30:39.342 09:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:39.342 09:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:39.342 09:45:11 -- pm/common@44 -- $ pid=669180 00:30:39.342 09:45:11 -- pm/common@50 -- $ kill -TERM 669180 00:30:39.342 09:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:39.342 09:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:39.342 09:45:11 -- pm/common@44 -- $ pid=669213 00:30:39.342 09:45:11 -- pm/common@50 -- $ sudo -E kill -TERM 669213 00:30:39.342 + [[ -n 309969 ]] 00:30:39.342 + sudo kill 309969 00:30:39.352 [Pipeline] } 00:30:39.369 [Pipeline] // stage 00:30:39.375 [Pipeline] } 00:30:39.393 [Pipeline] // timeout 00:30:39.399 [Pipeline] } 00:30:39.416 [Pipeline] // catchError 00:30:39.422 [Pipeline] } 00:30:39.439 [Pipeline] // wrap 00:30:39.446 [Pipeline] } 00:30:39.462 [Pipeline] // catchError 00:30:39.472 [Pipeline] stage 00:30:39.474 [Pipeline] { (Epilogue) 00:30:39.488 [Pipeline] catchError 00:30:39.490 [Pipeline] { 00:30:39.506 [Pipeline] echo 00:30:39.508 Cleanup processes 00:30:39.514 [Pipeline] sh 00:30:39.796 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:39.796 669346 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:39.796 669442 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:39.810 [Pipeline] sh 00:30:40.090 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:40.090 ++ grep -v 'sudo pgrep' 00:30:40.090 ++ awk '{print $1}' 00:30:40.090 + sudo kill -9 669346 00:30:40.102 [Pipeline] sh 00:30:40.383 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:50.361 [Pipeline] sh 00:30:50.645 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:50.645 Artifacts sizes are good 00:30:50.660 [Pipeline] archiveArtifacts 00:30:50.667 Archiving artifacts 00:30:50.870 [Pipeline] sh 00:30:51.149 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:51.163 [Pipeline] cleanWs 00:30:51.172 [WS-CLEANUP] Deleting project workspace... 00:30:51.172 [WS-CLEANUP] Deferred wipeout is used... 00:30:51.177 [WS-CLEANUP] done 00:30:51.179 [Pipeline] } 00:30:51.199 [Pipeline] // catchError 00:30:51.211 [Pipeline] sh 00:30:51.525 + logger -p user.info -t JENKINS-CI 00:30:51.532 [Pipeline] } 00:30:51.547 [Pipeline] // stage 00:30:51.553 [Pipeline] } 00:30:51.570 [Pipeline] // node 00:30:51.577 [Pipeline] End of Pipeline 00:30:51.609 Finished: SUCCESS
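For reference, the resource-monitor teardown near the end of the log (signalling the collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm helpers via their pid files under ../output/power) follows a simple pid-file pattern. Below is a minimal sketch assuming the same pid-file layout as this run; the harness itself checks each pid file individually and uses sudo only for the BMC collector, as the log shows.

#!/usr/bin/env bash
# Sketch of the monitor teardown seen above (pid-file names taken from this run).
POWER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
for name in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
    pidfile="$POWER/$name.pid"
    [ -e "$pidfile" ] || continue            # monitor never started; nothing to stop
    pid=$(cat "$pidfile")
    if [ "$name" = collect-bmc-pm ]; then
        sudo -E kill -TERM "$pid"            # BMC collector runs under sudo in this run
    else
        kill -TERM "$pid"
    fi
done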